
How to Optimize SQL Query in SQL Server
Category: Other

There are several ways to tune the performance of an SQL query. Here are a few tips ...


Views: 0 Likes: 9
SqlException: A connection was successfully establ ...
Category: .Net 7

Question: How do you solve the error that says "SqlException: A connection was ...


Views: 0 Likes: 31
What does COALESCE do in SQL
Category: SQL

In SQL, the `COALESCE` function is used to return the first non-null expression in a list of expr ...


Views: 0 Likes: 15
The .NET Stacks #62: And we're back

This is the web version of my weekly newsletter, The .NET Stacks, originally sent to email subscribers on September 13, 2021. Subscribe at the bottom of the post to get this right away!Happy Monday! Miss me? A few of you said you have, but I'm 60% sure that's sarcasm.As you know, I took the last month or so off from the newsletter to focus on other things. I know I wasn't exactly specific on why, and appreciate some of you reaching out. I wasn't comfortable sharing it at the time, but I needed to take time away to focus on determining the next step in my career. If you've interviewed lately, I'm sure you understand ... it really is a full-time job.  I'm happy to say I've accepted a remote tech lead role for a SaaS company here. I'm rested and ready, so let's get into it! I'm trying something a little different this week—feel free to let me know what you think.?? My favorite from last weekASP.NET 6.0 Minimal APIs, why should you care?Ben FosterWe've talked about Minimal APIs a lot in this newsletter and it's quite the hot topic in the .NET community. An alternative way to write APIs in .NET 6 and beyond, there's a lot of folks wondering if it's suitable for production, or can lead to misuse. Ben notes "Minimal simply means that it contains the minimum set of components needed to build HTTP APIs ... It doesn’t mean that the application you build will be simple or not require good design.""I find that one of the biggest advantages to Minimal APIs is that they make it easier to build APIs in an opinionated way. After many years building HTTP services, I have a preferred approach. With MVC I would replace the built-in validation with Fluent Validation and my controllers were little more than a dispatch call to Mediatr. With Minimal APIs I get even more control. Of course if MVC offers everything you need, then use that."In a similar vein, Nick Chapsas has a great walkthrough on strategies for building production-ready Minimal APIs. No one expects your API to be in one file, and he shows practical ways to deal with dependencies while leveraging minimal API patterns. Damian Edwards has a nice Twitter thread, as well. As great as these community discussions are, I really think the greatest benefit is getting lost the performance gains.?? Community and eventsIncreasing developer happiness with GitHub code scanningSam PartingtonIf you work in GitHub, you probably already know that GitHub utilizes code scanning to find security vulnerabilities and errors in your repository. Sam Partington writes about something you might not know they use CodeQL—their internal code analysis engine—to protect themselves from common coding mistakes. Here's what Sam says about loopy performance issues "In addition to protecting against missing error checking, we also want to keep our database-querying code performant. N+1 queries are a common performance issue. This is where some expensive operation is performed once for every member of a set, so the code will get slower as the number of items increases. Database calls in a loop are often the culprit here; typically, you’ll get better performance from a batch query outside of the loop instead.""We created a custom CodeQL query ... We filter that list of calls down to those that happen within a loop and fail CI if any are encountered. 
What’s nice about CodeQL is that we’re not limited to database calls directly within the body of a loop?calls within functions called directly or indirectly from the loop are caught too."You can check out the post for more details and learn how to use these queries or make your own.More from last weekSimon Bisson writes about how to use the VS Code editor in your own projects.The Netflix Tech Blog starts a series on practical API design and also starts writing about their decision-making process.The .NET Docs Show talks about micr0 frontends with Blazor.For community standups, Entity Framework talks about OSS projects, ASP.NET has an anniversary, .NET MAUI discusses accessibility, and Machine Learning holds office hours.?? Web developmentHow To Map A Route in an ASP.NET Core MVC applicationKhalid AbuhakmehIf you're new to ASP.NET Core web development, Khalid put together a nice post on how to add an existing endpoint to an existing ASP.NET Core MVC app. Even if you aren't a beginner, you might learn how to resolve sticky routing issues. At the bottom of the post, he has a nice checklist you should consider when adding a new endpoint.More from last weekBen Foster explores custom model binding with Minimal APIs in .NET 6.Thomas Ardal debugs System.FormatException when launching ASP.NET Core.Jeremy Morgan builds a small web API with Azure Functions and SQLite.Ed Charbeneau works with low-code data grids and Blazor.Scott Hanselman works with a Minimal API todo app.?? The .NET platformUsing Source Generators with Blazor components in .NET 6Andrew LockWhen Andrew was upgrading a Blazor app to .NET 6, he found that source generators that worked in .NET 5 failed to discover Blazor components in his .NET 6 app because of changes to the Razor compilation process.He writes "The problem is that my source generators were relying on the output of the Razor compiler in .NET 5 ... My source generator was looking for components in the compilation that are decorated with [RouteAttribute]. With .NET 6, the Razor tooling is a source generator, so there is no 'first step'; the Razor tooling executes at the same time as my source generator. That is great for performance, but it means the files my source generator was relying on (the generated component classes) don't exist when my generator runs."While this is by design, Andrew has a great post underlying the issue and potential workarounds.More from last weekMark Downie writes about his favorite improvements in .NET 6.Sergey Vasiliev writes about optimizing .NET apps.Pawel Szydziak writes cleaner, safer code with SonarQube, Docker, and .NET Core.Sam Basu writes about how to develop for desktop in 2022, and also about developing for .NET MAUI on macOS.Paul Michaels manually parses a JSON string using System.Text.Json.Johnson Towoju writes logs to SQL Server using NLog.Andrew Lock uses source generators with Blazor components in .NET 6.Rick Strahl launches Visual Studio Code cleanly from a .NET app.Jirí Cincura calls a C# static constructor multiple times.? The cloudMinimal Api in .NET 6 Out Of Process Azure FunctionsAdam StorrWith all this talk about Minimal APIs, Adam asks can I use it with the new out-of-process Azure Functions model in .NET 6?He says "Azure Functions with HttpTriggers are similar to ASP.NET Core controller actions in that they handle http requests, have routing, can handle model binding, dependency injection etc. 
so how could a 'Minimal API' using Azure Functions look?"More from last weekDamien Bowden uses Azure security groups in ASP.NET Core with an Azure B2C identity provider.Jon Gallant works with the ChainedTokenCredential in the Azure Identity library.Adam Storr uses .NET 6 Minimal APIs with out-of-process Azure Functions.?? ToolsNew Improved Attach to Process Dialog ExperienceHarshada HoleWith the 2022 update, Visual Studio is improving the debugging experience—included is a new Attach to Process dialog experience.Harshada says "We have added command-line details, app pool details, parent/child process tree view, and the select running window from the desktop option in the attach to process dialog. These make it convenient to find the right process you need to attach. Also, the Attach to Process dialog is now asynchronous, making it interactive even when the process list is updating." The post walks through these updates in detail.More from last weekJeremy Likness looks at the EF Core Azure Cosmos DB provider.Harshada Hole writes about the new Attach to Process dialog experience in Visual Studio.Ben De St Paer-Gotch goes behind the scenes on Docker Desktop.Esteban Solano Granados plays with .NET 6, C# 10, and Docker.?? Design, testing, and best practicesShip / Show / Ask A modern branching strategyRouan WilsenachRouan says "Ship/Show/Ask is a branching strategy that combines the features of Pull Requests with the ability to keep shipping changes. Changes are categorized as either Ship (merge into mainline without review), Show (open a pull request for review, but merge into mainline immediately), or Ask (open a pull request for discussion before merging)."More from last weekLiana Martirosyan writes about enabling team learning and boost performance.Sagar Nangare writes about measuring user experience in modern applications and infrastructure.Neal Ford and Mark Richards talk about the hard parts of software architecture.Derek Comartin discusses event-sourced aggregate design.Steve Smith refactors to value objects.Sam Milbrath writes about holding teams accountable without micromanaging.Helen Scott asks how can you stay ahead of the curve as a developer?Rouan Wilsenach writes about a ship / show / ask branching strategy.Jeremy Miller writes about integration Testing using the IHost Lifecycle with xUnit.Net.?? Podcasts and VideosServerless Chats discusses serverless for beginners.The .NET Core Show talks about DotPurple With Michael Babienco.The Changelog talks to a lawyer about GitHub Copilot.Technology and Friends talks to Sam Basu about .NET MAUI.Visual Studio Toolbox talks about Web Live Preview.The ASP.NET Monsters talk about new Git commands.Adventures in .NET talk about Jupyter notebooks.The On .NET Show migrates apps to modern authentication and processes payments with C# and Stripe.
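To ground the Minimal APIs discussion at the top of this issue, here is a minimal, hedged sketch of what a .NET 6 minimal API endpoint can look like; the route and payload are purely illustrative and not taken from any of the linked posts.

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// A complete HTTP endpoint without controllers or attributes.
app.MapGet("/todos/{id}", (int id) => Results.Ok(new { id, title = "sample todo" }));

app.Run();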


The multi-part identifier "inserted.Id" could not ...
Category: SQL

This error message typically occurs in SQL Server when you're trying to use a column value from a ...


Views: 0 Likes: 40
Keyword or statement option 'bulkadmin' is not sup ...
Category: SQL

Question: I am getting SQL Server error: Keyword or statement option 'bulkadmin' is not supported ...


Views: 0 Likes: 47
Incorrect syntax near the keyword 'with'
Category: SQL

Question: Incorrect syntax near the keyword 'with'. If this statement is a common table expressio ...


Views: 0 Likes: 30
What does TRIM function do in T-SQL
Category: Research

Title: Understanding the TRIM Function in T-SQL: A Comprehensive Guide with Code Examples and Vi ...


Views: 0 Likes: 39
Connect to Another Sql Data Engine Through Interne ...
Category: SQL

I had installed a SQL database engine and Microsoft SQL Server Management Studio on my other computer ...


Views: 297 Likes: 79
What’s new with identity in .NET 8

In April 2023, I wrote about the commitment by the ASP.NET Core team to improve authentication, authorization, and identity management in .NET 8. The plan we presented included three key deliverables: new APIs to simplify login and identity management for client apps like Single Page Apps (SPA) and Blazor WebAssembly; enablement of token-based authentication and authorization in ASP.NET Core Identity for clients that can't use cookies; and improvements to documentation. All three deliverables will ship with .NET 8. In addition, we were able to add a new identity UI for Blazor web apps that works with both of the new rendering modes, server and WebAssembly.

Let's look at a few scenarios that are enabled by the new changes in .NET 8. In this blog post we'll cover securing a simple web API backend, using the new Blazor identity UI, adding an external login like Google or Facebook, securing Blazor WebAssembly apps using built-in features and components, and using tokens for clients that can't use cookies. Let's start with the simplest scenario for using the new identity features.

Basic Web API backend

An easy way to use the new authorization is to enable it in a basic Web API app. The same app may also be used as the backend for Blazor WebAssembly, Angular, React, and other Single Page Web apps (SPA). Assuming you're starting with an ASP.NET Core Web API project in .NET 8 that includes OpenAPI, you can add authentication with a few steps. Identity is "opt-in," so there are a few packages to add: Microsoft.AspNetCore.Identity.EntityFrameworkCore, the package that enables EF Core integration, and a package for the database you wish to use, such as Microsoft.EntityFrameworkCore.SqlServer (we'll use the in-memory database for this example). You can add these packages using the NuGet package manager or the command line. For example, to add the packages using the command line, navigate to the project folder and run the following dotnet commands:

dotnet add package Microsoft.AspNetCore.Identity.EntityFrameworkCore
dotnet add package Microsoft.EntityFrameworkCore.InMemory

Identity allows you to customize both the user information and the user database in case you have requirements beyond what is provided in the .NET Core framework. For our basic example, we'll just use the default user information and database. To do that, we'll add a new class to the project called MyUser that inherits from IdentityUser:

class MyUser : IdentityUser {}

Add a new class called AppDbContext that inherits from IdentityDbContext<MyUser>:

class AppDbContext(DbContextOptions<AppDbContext> options) : IdentityDbContext<MyUser>(options) { }

Providing the special constructor makes it possible to configure the database for different environments. To set up identity for an app, open the Program.cs file. Configure identity to use cookie-based authentication and to enable authorization checks by adding the following code after the call to WebApplication.CreateBuilder(args):

builder.Services.AddAuthentication(IdentityConstants.ApplicationScheme)
    .AddIdentityCookies();
builder.Services.AddAuthorizationBuilder();

Configure the EF Core database. Here we'll use the in-memory database and name it "AppDb." It's used here for the demo so it is easy to restart the application and test the flow to register and log in (each run will start with a fresh database). Changing to SQLite will save users between sessions, but requires the database to be properly created through migrations, as shown in this EF Core getting started tutorial.
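If you do switch to SQLite as suggested above, the registration might look like the following sketch. This is an illustration rather than something from the post itself: it assumes the Microsoft.EntityFrameworkCore.Sqlite package, an arbitrary database file name, and the standard EF Core CLI migration commands.

builder.Services.AddDbContext<AppDbContext>(
    options => options.UseSqlite("Data Source=app.db")); // illustrative file name

// Create the schema once with the EF Core tools:
//   dotnet ef migrations add InitialCreate
//   dotnet ef database update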
You can use other relational providers such as SQL Server for your production code.

builder.Services.AddDbContext<AppDbContext>(
    options => options.UseInMemoryDatabase("AppDb"));

Configure identity to use the EF Core database and expose the identity endpoints:

builder.Services.AddIdentityCore<MyUser>()
    .AddEntityFrameworkStores<AppDbContext>()
    .AddApiEndpoints();

Map the routes for the identity endpoints. This code should be placed after the call to builder.Build():

app.MapIdentityApi<MyUser>();

The app is now ready for authentication and authorization! To secure an endpoint, use the .RequireAuthorization() extension method where you define the auth route. If you are using a controller-based solution, you can add the [Authorize] attribute to the controller or action.

To test the app, run it and navigate to the Swagger UI. Expand the secured endpoint, select "Try it out," and select "Execute." The endpoint is reported as 404 - not found, which is arguably more secure than reporting a 401 - not authorized because it doesn't reveal that the endpoint exists. Now expand /register and fill in your credentials. If you enter an invalid email address or a bad password, the result includes the validation errors. The errors in this example are returned in the ProblemDetails format so your client can easily parse them and display validation errors as needed. I'll show an example of that in the standalone Blazor WebAssembly app. A successful registration results in a 200 - OK response.

You can now expand /login and enter the same credentials. Note, there are additional parameters that aren't needed for this example and can be deleted. Be sure to set useCookies to true. A successful login results in a 200 - OK response with a cookie in the response header. Now you can rerun the secured endpoint and it should return a valid result. This is because cookie-based authentication is securely built in to your browser and "just works." You've just secured your first endpoint with identity!

Some web clients may not include cookies in the header by default. If you are using a tool for testing APIs, you may need to enable cookies in the settings. The JavaScript fetch API does not include cookies by default. You can enable them by setting credentials to the value include in the options. Similarly, an HttpClient running in a Blazor WebAssembly app needs the HttpRequestMessage to include credentials, like the following:

request.SetBrowserRequestCredentials(BrowserRequestCredentials.Include);

Next, let's jump into a Blazor web app.

The Blazor identity UI

A stretch goal of our team that we were able to achieve was to implement the identity UI, which includes options to register, log in, and configure multi-factor authentication, in Blazor. The UI is built into the template when you select the "Individual accounts" option for authentication. Unlike the previous version of the identity UI, which was hidden unless you wanted to customize it, the template generates all of the source code so you can modify it as needed. The new version is built with Razor components and works with both server-side and WebAssembly Blazor apps.

The new Blazor web model allows you to configure whether the UI is rendered server-side or from a client running in WebAssembly. When you choose the WebAssembly mode, the server will still handle all authentication and authorization requests. It will also generate the code for a custom implementation of AuthenticationStateProvider that tracks the authentication state.
The provider uses the PersistentComponentState class to pre-render the authentication state and persist it to the page. The PersistentAuthenticationStateProvider in the client WebAssembly app uses the component to synchronize the authentication state between the server and browser. The state provider might also be named PersistingRevalidatingAuthenticationStateProvider when running with auto interactivity, or IdentityRevalidatingAuthenticationStateProvider for server interactivity.

Although the examples in this blog post are focused on a simple username and password login scenario, ASP.NET Identity has support for email-based interactions like account confirmation and password recovery. It is also possible to configure multifactor authentication. The components for all of these features are included in the UI.

Add an external login

A common question we are asked is how to integrate external logins through social websites with ASP.NET Core Identity. Starting from the Blazor web app default project, you can add an external login with a few steps.

First, you'll need to register your app with the social website. For example, to add a Twitter login, go to the Twitter developer portal and create a new app. You'll need to provide some basic information to obtain your client credentials. After creating your app, navigate to the app settings and click "edit" on authentication. Specify "native app" for the application type for the flow to work correctly and turn on "request email from users." You'll need to provide a callback URL. For this example, we'll use https://localhost:5001/signin-twitter, which is the default callback URL for the Blazor web app template. You can change this to match your app's URL (i.e. replace 5001 with your own port). Also note the API key and secret.

Next, add the appropriate authentication package to your app. There is a community-maintained list of OAuth 2.0 social authentication providers for ASP.NET Core with many options to choose from. You can mix multiple external logins as needed. For Twitter, I'll add the AspNet.Security.OAuth.Twitter package. From a command prompt in the root directory of the server project, run these commands to store your API key (client ID) and secret:

dotnet user-secrets set "TwitterApiKey" "<your-api-key>"
dotnet user-secrets set "TwitterApiSecret" "<your-api-secret>"

Finally, configure the login in Program.cs by replacing this code:

builder.Services.AddAuthentication(IdentityConstants.ApplicationScheme)
    .AddIdentityCookies();

with this code:

builder.Services.AddAuthentication(IdentityConstants.ApplicationScheme)
    .AddTwitter(opt =>
    {
        opt.ClientId = builder.Configuration["TwitterApiKey"]!;
        opt.ClientSecret = builder.Configuration["TwitterApiSecret"]!;
    })
    .AddIdentityCookies();

Cookies are the preferred and most secure approach for implementing ASP.NET Core Identity. Tokens are supported if needed and require the IdentityConstants.BearerScheme to be configured. The tokens are proprietary and the token-based flow is intended for simple scenarios, so it does not implement the OAuth 2.0 or OIDC standards.

What's next? Believe it or not, you're done. This time when you run the app, the login page will automatically detect the external login and provide a button to use it. When you log in and authorize the app, you will be redirected back and authenticated.
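The post notes that you can mix multiple external logins as needed. As a hedged sketch (not from the original post), a second provider such as Google can be chained onto the same call; this assumes the Microsoft.AspNetCore.Authentication.Google package and Google credentials stored under illustrative secret names.

builder.Services.AddAuthentication(IdentityConstants.ApplicationScheme)
    .AddTwitter(opt =>
    {
        opt.ClientId = builder.Configuration["TwitterApiKey"]!;
        opt.ClientSecret = builder.Configuration["TwitterApiSecret"]!;
    })
    .AddGoogle(opt =>
    {
        // Secret names are illustrative; store them with dotnet user-secrets as above.
        opt.ClientId = builder.Configuration["GoogleClientId"]!;
        opt.ClientSecret = builder.Configuration["GoogleClientSecret"]!;
    })
    .AddIdentityCookies();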
Securing Blazor WebAssembly apps

A major motivation for adding the new identity APIs was to make it easier for developers to secure their browser-based apps, including Single Page Apps (SPA) and Blazor WebAssembly. It doesn't matter if you use the built-in identity provider, a custom login, or a cloud-based service like Microsoft Entra; the end result is an identity that is either authenticated with claims and roles, or not authenticated.

In Blazor, you can secure a Razor component by adding the [Authorize] attribute to the component or to the page that hosts the component. You can also secure a route by adding the .RequireAuthorization() extension method to the route definition. The full source code for this example is available in the Blazor samples repo.

The AuthorizeView tag provides a simple way to handle content the user has access to. The authentication state can be accessed via the context property. Consider the following:

<p>Welcome to my page!</p>
<AuthorizeView>
    <Authorizing>
        <div class="alert alert-info">We're checking your credentials...</div>
    </Authorizing>
    <Authorized>
        <div class="alert alert-success">You are authenticated @context.User.Identity?.Name</div>
    </Authorized>
    <NotAuthorized>
        <div class="alert alert-warning">You are not authenticated!</div>
    </NotAuthorized>
</AuthorizeView>

The greeting will be shown to everyone. In the case of Blazor WebAssembly, when the client might need to authenticate asynchronously over API calls, the Authorizing content will be shown while the authentication state is queried and resolved. Then, based on whether or not you've authenticated, you'll either see your name or a message that you're not authenticated.

How exactly does the client know if you're authenticated? That's where the AuthenticationStateProvider comes in. The App.razor page is wrapped in a CascadingAuthenticationState provider. This provider is responsible for tracking the authentication state and making it available to the rest of the app. The AuthenticationStateProvider is injected into the provider and used to track the state. The AuthenticationStateProvider is also injected into the AuthorizeView component. When the authentication state changes, the provider notifies the AuthorizeView component and the content is updated accordingly.

First, we want to make sure that API calls are persisting credentials accordingly. To do that, I created a handler named CookieHandler:

public class CookieHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        request.SetBrowserRequestCredentials(BrowserRequestCredentials.Include);
        return base.SendAsync(request, cancellationToken);
    }
}

In Program.cs I added the handler to the HttpClient and used the client factory to configure a special client for authentication purposes:

builder.Services.AddTransient<CookieHandler>();
builder.Services.AddHttpClient(
    "Auth",
    opt => opt.BaseAddress = new Uri(builder.Configuration["AuthUrl"]!))
    .AddHttpMessageHandler<CookieHandler>();

Note the authentication components are opt-in and available via the Microsoft.AspNetCore.Components.WebAssembly.Authentication package. The client factory and extension methods come from Microsoft.Extensions.Http. The AuthUrl is the URL of the ASP.NET Core server that exposes the identity APIs.

Next, I created a CookieAuthenticationStateProvider that inherits from AuthenticationStateProvider and overrides the GetAuthenticationStateAsync method.
The main logic looks like this:

var unauthenticated = new ClaimsPrincipal(new ClaimsIdentity());
var user = unauthenticated; // start from the unauthenticated principal
var userResponse = await _httpClient.GetAsync("manage/info");
if (userResponse.IsSuccessStatusCode)
{
    var userJson = await userResponse.Content.ReadAsStringAsync();
    var userInfo = JsonSerializer.Deserialize<UserInfo>(userJson, jsonSerializerOptions);
    if (userInfo != null)
    {
        var claims = new List<Claim>
        {
            new(ClaimTypes.Name, userInfo.Email),
            new(ClaimTypes.Email, userInfo.Email)
        };
        var id = new ClaimsIdentity(claims, nameof(CookieAuthenticationStateProvider));
        user = new ClaimsPrincipal(id);
    }
}
return new AuthenticationState(user);

The user info endpoint is secure, so if the user is not authenticated the request will fail and the method will return an unauthenticated state. Otherwise, it builds the appropriate identity and claims and returns the authenticated state.

How does the app know when the state has changed? Here is what a login looks like from Blazor WebAssembly using the identity API:

async Task<AuthenticationState> LoginAndGetAuthenticationState()
{
    var result = await _httpClient.PostAsJsonAsync(
        "login?useCookies=true", new { email, password });
    return await GetAuthenticationStateAsync();
}

NotifyAuthenticationStateChanged(LoginAndGetAuthenticationState());

When the login is successful, the NotifyAuthenticationStateChanged method on the base AuthenticationStateProvider class is called to notify the provider that the state has changed. It is passed the result of the request for a new authentication state so that it can verify the cookie is present. The provider will then update the AuthorizeView component and the user will see the authenticated content.

Tokens

In the rare event your client doesn't support cookies, the login API provides a parameter to request tokens. A custom token (one that is proprietary to the ASP.NET Core identity platform) is issued that can be used to authenticate subsequent requests. The token is passed in the Authorization header as a bearer token. A refresh token is also provided. This allows your application to request a new token when the old one expires without forcing the user to log in again.

The tokens are not standard JSON Web Tokens (JWT). The decision behind this was intentional, as the built-in identity is meant primarily for simple scenarios. The token option is not intended to be a fully-featured identity service provider or token server, but instead an alternative to the cookie option for clients that can't use cookies. Not sure whether you need a token server or not? Read a document to help you choose the right ASP.NET Core identity solution. Looking for a more advanced identity solution? Read our list of identity management solutions for ASP.NET Core.

Docs and samples

The third deliverable is documentation and samples. We have already introduced new documentation and will be adding new articles and samples as we approach the release of .NET 8. Follow Issue #29452 – documentation and samples for identity in .NET 8 to track the progress. Please use the issue to communicate additional documentation or samples you are looking for. You can also link to the specific issues for various documents and provide your feedback there.

Conclusion

The new identity features in .NET 8 make it easier than ever to secure your applications. If your requirements are simple, you can now add authentication and authorization to your app with a few lines of code.
The new APIs make it possible to secure your Web API endpoints with cookie-based authentication and authorization. There is also a token-based option for clients that can't use cookies. Learn more about the new identity features in the ASP.NET Core documentation.
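To make the token flow from the Tokens section concrete, here is a minimal, hedged client sketch. It assumes the endpoints exposed by MapIdentityApi above, a secured route named "/secret" (purely illustrative), and the .NET 8 response field names (accessToken, refreshToken, and so on); verify those names against the version you are running.

using System.Net.Http.Headers;
using System.Net.Http.Json;

var http = new HttpClient { BaseAddress = new Uri("https://localhost:5001") }; // illustrative URL

// Request tokens instead of a cookie by leaving useCookies unset.
var login = await http.PostAsJsonAsync("/login", new { email = "test@example.com", password = "Passw0rd!" });
var tokens = await login.Content.ReadFromJsonAsync<TokenResponse>();

// Send the access token as a bearer token on subsequent calls.
http.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", tokens!.AccessToken);
var secured = await http.GetAsync("/secret");

// Shape of the identity token response (field names assumed from .NET 8).
record TokenResponse(string TokenType, string AccessToken, int ExpiresIn, string RefreshToken);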


The .NET Stacks #32: SSR is cool again

Good morning and happy Monday! We’ve got a few things to discuss this weekThe new/old hotness HTML over the wireXamarin.Forms 5.0 released this weekQuick break how to explaining C# string interpolation to the United States SenateLast week in the .NET worldThe new/old hotness server-side renderingOver the holidays, I was intrigued by the release of the Hotwire project, from the folks at BasecampHotwire is an alternative approach to building modern web applications without using much JavaScript by sending HTML instead of JSON over the wire. This makes for fast first-load pages, keeps template rendering on the server, and allows for a simpler, more productive development experience in any programming language, without sacrificing any of the speed or responsiveness associated with a traditional single-page application.Between this and other tech such as Blazor Server, the “DOM over the wire” movement is in full force. It’s a testament to how bloated and complicated the front end has become.Obviously, rendering partial HTML over the wire isn’t anything new at all—especially to us .NET developers—and it’s sure to bring responses like “Oh, you mean what I’ve been doing the last 15 years?” As much as I enjoy the snark, it’s important to not write it off as the front-end community embracing what we’ve become comfortable with, as the technical details differ a bit—and we can learn from it. For example, it looks like instead of Hotwire working with DOM diffs over the wire, it streams partial updates over WebSocket while dividing complex pages into separate components, with an eye on performance. I wonder how Blazor Server would have been architected if this was released 2 years ago.Xamarin.Forms 5.0 released this weekThis week, the Xamarin team released the latest stable release of Xamarin.Forms, version 5.0, which will be supported through November 2022. There’s updates for App Themes, Brushes, and SwipeView, among other things. The team had a launch party. Also, David Ramel writes that this latest version drops support for Visual Studio 2017. Updates to Android and iOS are only delivered to 2019, and pivotal for getting the latest updates from Apple and Google.2021 promises to be a big year for Xamarin, as they continue preparing to join .NET 6—as this November, Xamarin.Forms evolves into MAUI (the .NET Multi-Platform App UI). This means more than developing against iPhones and Android devices, of course. With .NET 6 this also includes native UIs for iOS, Android, and desktops. As David Ramel also writes, Linux will not be supported out of the gate and VS Code support will be quite limited.As he also writes, in a community standup David Ortinau clarifies that MAUI is not a rewrite.So my hope and expectation, depending on the complexity of your projects, is you can be up and going within days … It’s not rewrites – it’s not a rewrite – that’s probably the biggest message that I should probably say over and over and over again. You’re not rewriting your application.Quick break how to explain C# string interpolation to the United States SenateDid I ever think C# string interpolation would make it to the United States Senate? No, I most certainly did not. But last month, that’s what happened as former Cybersecurity and Infrastructure Security Agency (CISA) head Chris Krebs explained a bugIt’s on page 20 … it says ‘There is no permission to {0}’. … Something jumped out at me, having worked at Microsoft. … The election-management system is coded with the programming language called C#. 
There is no permission to {0}’ is placeholder for a parameter, so it may be that it’s just not good coding, but that certainly doesn’t mean that somebody tried to get in there a 0. They misinterpreted the language in what they saw in their forensic audit.It appears that the election auditors were scared by something like thisConsole.WriteLine("There is no permission to {0}"); To us, we know it’s just a log statement that verifies permission checks are working. It should have been coded using one of the following lines of codeConsole.WriteLine("There is no permission to {0}", permission); Console.WriteLine($"There is no permission to {permission}"); I’m available to explain string interpolation to my government for a low, low rate of $1000 an hour. All they had to do was ask.?? Last week in the .NET world?? The Top 4Josef Ottosson works with polymorphic deserialization with System.Text.Json.Shahed Chowdhuri works with init-only setters in C# 9.Khalid Abuhakmeh writes about EF Core 5 interceptors.Over at the AWS site, the folks at DraftKings have a nice read about modernizing with .NET Core and AWS.?? AnnouncementsWinUI 3 Preview 3 has been released.David Ortinau announces the arrival of Xamarin.Forms 5.0.Microsoft Learn has a new module on learning Python.James Newton-King releases a new Microsoft doc, Code-first gRPC services and clients with .NET.Phillip Carter brings attention to a living F# coding conventions document.Patrick Svensson releases version 0.37 of Spectre.Console.The Azure SDK team released new .NET packages to simplify migrations using Newtonsoft.Json and/or Microsoft.Spatial.The EF Core team releases EFCore.NamingConventions 5.0.1, which fixes issues with owned entities and table splitting in 5.0.0.?? Community and eventsChris Noring introduces GitHub’s web dev for beginners tutorials.Niels Swimberghe rolls out two utilities written in Blazor a GZIP compressor/decompressor, and a .NET GUID generator.ErikEJ writes about some free resources for EF 5.The Xamarin community standup is a launch party for Xamarin.Forms 5.The .NET Docs Show talks to co-host David Pine about his localization project.Shahed Chowdhuri previews a new C# A-Z project and a Marvel cinematic visualization app.Chris Woodruff kicks of an ASP.NET 5 Web API blog series.VS Code Day is slated for January 27.?? Web developmentPeter Vogel writes about displaying lists efficiently in Blazor.Over at Code Maze, using the API gateway pattern in .NET to encapsulate microservices.David Fowler notes that web socket compression is coming to .NET 6.Chris Noring manages configuration in ASP.NET Core.Marinko Spasojevic signs in with Google using Angular and ASP.NET Core Web API.Damien Bowden works with Azure AD access token lifetime policy management in ASP.NET Core.Paul Michaels views server variables in ASP.NET Core.Sahan Serasinghe writes about using Web Sockets with ASP.NET Core.?? The .NET platformRichard Reedy talks about the Worker Service in .NET Core.Marco Minerva develops desktop apps with .NET 5.Jimmy Bogard works with ActivitySource and ActivityListener in .NET 5.Nikola Zivkovic introduces machine learning with ML.NET.Nick Randolph works with missing files in a multi-targeted project.Stefan Koell writes about migrating Royal TS from WinForms to .NET 5.? 
The cloudAndrew Lock auto-assigns issues using a GitHub Action.Richard Reedy builds a chatbot to order a pizza.Dave Brock uses the Microsoft Bot Framework to analyze emotion with the Azure Face API.Jonathan Channon uses GCP Cloud Functions with F#.Justin Yoo writes about using Azure EventGrid.Mark Heath writes about bulk uploading files to Azure Blob Storage with the Azure CLI.Daniel Krzyczkowski continues his series on writing an ASP.NET Core API secured by Azure AD B2C.Paul Michaels schedules message delivery with Azure Service Bus.?? LanguagesRick Strahl works with blank zero values in .NET number format strings.David McCarter analyzes code for issues in .NET 5.Khalid Abuhakmeh plays audio files with .NET.Daniel Bachler talks about what he wishes he knew when learning F#.Michal Niegrzybowski writes about signaling in WebRTC with Ably and Fable.Mark-James McDougall talks about why he’s learning F# in 2021.?? ToolsJason Robert creates a serverless Docker image.Stephen Cleary kicks off a series around asynchronous messaging.Michal Bialecki recaps useful SQL statements when writing EF Core migrations.Derek Comartin splits up a monolith into microservices.Frank Boucher creates a CI/CD deployment solution for a Docker project.Alex Orlov writes about using TLS 1.3 for IMAP and SMTP connections through Mailbee.NET.Brad Beggs writes about using vertical rulers in VS Code.Tim Cochran writes about maximizing developer effectiveness.?? XamarinDavid Ramel writes how Xamarin.Forms won’t be on Linux or VS Code for MAUI in .NET 6, and also mentions that Xamarin.Forms 5 is dropping Visual Studio 2017 support.Leomaris Reyes writes about Xamarin Essentials.Anbu Mani works with infinite scrolling in Xamarin.Forms.Matthew Robbins embeds a JS interpreter into Xamarin apps with Jint.?? PodcastsScott Hanselman talks to Amanda Silver about living through 2020 as a remote developer.The 6-Figure Developer Podcast talks with Phillip Carter about F# and functional programming.The Azure DevOps Podcast talks with Sam Nasr about SQL Server for developers.?? VideosVisual Studio Toolbox talks about the Azure App Insights Profiler.The ASP.NET Monsters talk with Andrew Stanton-Nurse.Gerald Versluis secures a Xamarin app with fingerprint or face recognition.James Montemagno makes another Xamarin.Forms 101 video.At Technology and Friends, David Giard talks to Javier Lozano about virtual conferences.Jeff Fritz works on ASP.NET Core MVC and also APIs with ASP.NET Core.ON.NET discusses cross-platform .NET development with OmniSharp.


How do you perform Math in SQL
Category: Research

Math in SQL is an essential skill for anyone working with databases. It allows you to manipulate ...


Views: 0 Likes: 28
How do you use CONCAT_WS function in T-SQL
Category: Research

Title: Mastering String Concatenation in T-SQL with the CONCAT_WS Function. In the realm ...


Views: 0 Likes: 35
Dew Drop – June 21, 2023 (#3969)

Top Links Introducing the New T4 Command-Line Tool for .NET (Mike Corsaro) How to use GitHub Copilot Prompts, tips, and use cases (Rizel Scarlett) How to Hide Your Angular Properties – # vs private Explained (Deborah Kurata) Improved .NET Debugging Experience with Source Link (Patrick Smacchia) 7 Things about C# Running Apps (Joe Mayo) Web & Cloud Development Run OpenTelemetry on Docker (B. Cameron Gain) How Much Will It Hurt? The 10 Things You Need to Do to Migrate Your MVC/Web API App to ASP.NET Core (Peter Vogel) Node v16.20.1 (LTS) and Node v20.3.1 (Current) and Node v18.16.1 (LTS) (Rafael Gonzaga) Service to check if application browser tab is active or not (silfversparre) New W3C website deployed (Coralie Mercier) How to persist Postman variables (Joyce) Dependent Stack Updates with Pulumi Deployments (Komal Ali) Detecting Scene Changes in Audiovisual Content (Avneesh Saluja, Andy Yao & Hossein Taghavi) Exploring the Exciting New Features of TypeScript 5.0 and 5.1 (Suprotim Agarwal) What is an API endpoint? (Postman Team) WinUI, .NET MAUI & XAML .NET MAUI + GitHub Actions + Commas in Certificate Names (Mitchel Sellers) Visual Studio & .NET Integer compression Implementing FastPFor decoding in C# (Oren Eini) Permutations of a String in C# (Matjaz Prtenjak) Using StringBuilder To Replace Values (Khalid Abuhakmeh) Create your own Mediator (like Mediatr) (Steven Giesel) Microsoft Forms Service’s Journey to .NET 6 (Ray Yao) Why is Windows using only even-numbered processors? (Raymond Chen) JetBrains Toolbox App 2.0 Beta Streamlines Installation and Improves Integrations (Victor Kropp) Design, Methodology & Testing One critical skill for a Scrum Master and why? (Martin Hinshelwood) Top 6 AI Coding Assistants in 2023 (Fimber Elemuwa) Big-O Notation and Complexity Analysis (Kirupa Chinnathambi) Cleaning up files changed by a GitHub Action that runs in a container (Rob Bos) To improve as an engineer, get better at requesting (and receiving) feedback (Chelsea Troy) Mobile, IoT & Game Development Get started developing mixed reality for Meta Quest 3 with Unity (Kevin Semple) Screencasts & Videos Technology & Friends – Alex Mattoni on Cycle.io (David Giard) FreeCodeSession – Episode 463 (Jason Bock) What I Wish I Knew… about interviewing for jobs (Leslie Richardson) Podcasts CodeNewbie S24E7 – Navigating Layoffs with Intention (Natalie Davis) (CodeNewbie Team) The Rework Podcast – Buckets of Time (Jason Fried & David Heinemeier Hansson) What It Takes To Be A Web Developer Part 2 – JavaScript Jabber 587 (AJ O’Neal & Dan Shappir) Python Bytes Podcast #341 – Shhh – For Secrets and Shells (Michael Kennedy) Tools and Weapons Podcast – First Vice President Nadia Calviño Architecting Spain’s AI future (Brad Smith) RunAs Radio – Windows Update for Business with Aria Carley (Richard Campbell) Defense Unicorns, A Podcast – Learning from Your Peers with Tracy Gregorio (Rob Slaughter) Community & Events Juneteenth Conference Comes to Chicago (David Giard) Celebrating Tech Trailblazers for Juneteenth (Daniel Ikem) Stack Overflow’s 2023 developer survey Are developers using AI? (Esther Shein) What Does Gen Z Want at Work? 
The Same Things You Wanted Once Upon a Time (Katie Bartlet) Meet the Skilling Champion Priyesh Wagh (Rie Moriguchi) Things to Do in Philadelphia This Week & Weekend (Visit Philly) The Next Phase of Eleventy Return of the Side Project (Zach Leatherman) Database SQL SERVER – Resolving Deadlock by Accessing Objects in the Same Order (Pinal Dave) The Right Tools for Optimizing Azure SQL Managed Instance Performance (Rie Merritt) Latest features in Azure Managed Instance for Apache Cassandra (Theo van Kraay) T-SQL Tuesday #163 – Career Advice I received (Tracy Boggiano) Miscellaneous Electronic Signatures 2023 Legal Aspects (Bjoern Meyer) Releasing Windows 11 Build 22621.1926 to the Release Preview Channel (Brandon LeBlanc) Windows 11 Moment 3 Heads to the Release Preview Channel (Paul Thurrott) Microsoft CEO Satya Nadella and many Xbox executives are set to defend its FTC case (Tom Warren) More Link Collections The Morning Brew #3731 (Chris Alcock) Sands of MAUI Issue #108 (Sam Basu) Daily Reading List – June 20, 2023 (#107) (Richard Seroter) The Geek Shelf  Learn WinUI 3 (Alvin Ashcraft)


Introduction to Entity Framework Core

In this post I am going to look into Entity Framework Core, present the new features, and cover the similarities and differences from EF6. I hope that the people who will read this post are familiar with what EF is. In a nutshell, EF is the official data access technology platform from Microsoft. It is a mature platform, since it is almost 9 years old. It is a new way of accessing and storing data, a set of .NET APIs for accessing data from any data store.

Going into more detail, Entity Framework fits into a category of data access technologies called Object Relational Mappers, or ORMs. ORMs reduce the friction between how data is structured in a relational database and how you define your classes. Without an ORM, we typically have to write a lot of code to transform database results into instances of the classes. An ORM allows us to express our queries using our classes, and then the ORM builds and executes the relevant SQL for us, as well as materializing objects from the data that came back from the database. By using an ORM we can eliminate redundant data interaction tasks and really enhance developer productivity. Instead of writing the relevant SQL to target whatever relational database you're working with, Entity Framework uses the LINQ syntax that's part of the .NET framework. LINQ to Entities allows developers to use a strongly-typed query language regardless of which database they're targeting.

When you use EF Core in your application, you first need to create your domain classes. These are pure .NET classes or objects and have nothing to do with EF Core. Then you use Entity Framework Core APIs to define a data model based on those domain classes. You also use Entity Framework APIs to write and execute LINQ to Entities queries against those classes. When you need to save data back to the database, you use the SaveChanges method. EF Core keeps track of the state of objects that it's aware of, and it'll determine the SQL it needs to save data back to the database. Entity Framework will transform your LINQ to Entities queries into SQL, execute that SQL, and then create objects from query results.

As you may know, Entity Framework development was moved to CodePlex and became open source. Entity Framework 6 has since been moved to GitHub. The URL is github.com/aspnet/EntityFramework. You can download and experiment with the latest version, you can clone the repository, add your own fixes/features, and then submit those as pull requests to become part of Entity Framework Core. All pull requests that are submitted from the community are examined by the ADO.NET team before becoming part of EF Core.

Another thing I wanted to mention is that different people call EF by different names, so let me be clear on that to avoid confusion. The project was originally called Entity Framework 7, but in January 2016 its name was changed to Entity Framework Core, and it was released in late June of 2016. At the same time ASP.NET 5 was changed to ASP.NET Core. EF Core is not simply an update from EF6; it's a different kind of Entity Framework, a complete rewrite. Developers who have invested time and effort in EF6 and have projects on EF6 should not worry, as it will continue to be actively supported. Entity Framework Core can run on .NET Core. .NET Core can run on the CoreCLR, and the CoreCLR can run natively not only on Windows, but also on Mac and Linux. EF Core can also run inside the full .NET Framework, that is, any version that is 4.5.1 or newer.
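To make the query-and-save workflow described above concrete, here is a minimal sketch using current EF Core APIs. The entity, the context, and the SQLite provider (Microsoft.EntityFrameworkCore.Sqlite) are illustrative choices, not something prescribed by this post.

using System.Linq;
using Microsoft.EntityFrameworkCore;

// LINQ to Entities is translated to SQL; SaveChanges persists tracked changes.
using (var db = new BloggingContext())
{
    db.Database.EnsureCreated(); // create the illustrative database if it doesn't exist

    db.Blogs.Add(new Blog { Title = "Hello EF Core" });
    db.SaveChanges();

    var recent = db.Blogs
        .Where(b => b.Title.Contains("EF"))
        .OrderBy(b => b.Id)
        .ToList();
}

public class Blog
{
    public int Id { get; set; }
    public string Title { get; set; } = "";
}

public class BloggingContext : DbContext
{
    public DbSet<Blog> Blogs => Set<Blog>();

    protected override void OnConfiguring(DbContextOptionsBuilder options)
        => options.UseSqlite("Data Source=blogs.db"); // illustrative connection string
}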
Entity Framework Core is a brand new set of APIs and it doesn't have all of the features that you might be used to with Entity Framework, so it's important to understand that before starting a new project with Entity Framework Core. If you want to target cross-platform or UWP apps, then you have no other way but to use Entity Framework Core. For .NET apps that you want to run on Windows, you can still use Entity Framework 6. If you are building an ASP.NET Core application that will run on Windows, you can still use Entity Framework 6, so bear that in mind as well.

There are Entity Framework features that will never be part of Entity Framework Core; for example, in EF Core there is no support for a designer-based model, there's no EDMX, and it isn't supported by the Entity Framework Designer. Having said that, in EF Core you can still define a data model with classes and a DbContext. The DbContext API is still there, and so are DbSets. You can also create the database and migrate it as your model changes, and you can still query with LINQ to Entities. Entity Framework continues to track changes to entities in memory.

There are some new features in EF Core that were not available in earlier versions. In EF Core we have the ability to do batch inserts, updates, and deletes. We can specify unique foreign keys in entities, and LINQ queries have become smarter and more efficient. In EF Core there is an In-Memory provider that makes it really easy to build automated tests using Entity Framework without hitting the database. EF Core is easy to use with inversion of control patterns and dependency injection. EF Core has the ability to populate backing fields, not just properties. EF Core supports mapping to IEnumerables. Hope you have a better understanding of EF Core now. Hope it helps!!!


SQL table Design Best Practices
Category: SQL

When working with SQL tables, sometimes it is frustrating to find no columns with "Date" the data wa ...


Views: 280 Likes: 101
The .NET Stacks #18: RC1 is here, the fate of .NET Standard, and F# with Isaac Abraham

.NET 5 RC1 is hereThis week, Microsoft pushed out the RC1 release for .NET 5, which is scheduled to officially “go live” in early November. RC1 comes with a “go live” license, which means you get production support for it. With that, RC1 versions were released for ASP.NET Core and EF Core as well.I’ve dug deep on a variety of new features in the last few months or so—I won’t  rehash them here. However, the links are worth checking out. For example, Richard Lander goes in-depth on C# 9 records and System.Text.Json.The fate of .NET StandardWhile there are many great updates to the upcoming .NET 5 release, a big selling point is at a higher level the promise of a unified SDK experience for all of .NET. The idea is that you’ll be able to use one platform regardless of your needs—whether it’s Windows, Linux, macOS, Android, WebAssembly, and more. (Because of internal resourcing constraints, Xamarin will join the party in 2021, with .NET 6.)Microsoft has definitely struggled in communicating a clear direction for .NET the last several years, so when you pair a unified experience with predictable releases and roadmaps, it’s music to our ears.You’ve probably wondered what does this mean for .NET Standard? The unified experience is great, but what about when you have .NET Framework apps to support? (If you’re new to .NET Standard, it’s more-or-less a specification where you can target a version of Standard, and all .NET implementations that target it are guaranteed to support all its .NET APIs.)Immo Landwerth shed some light on the subject this week. .NET Standard is being thrown to the .NET purgatory with .NET Framework it’ll still technically be around, and .NET 5 will support it—but the current version, 2.1, will be its last.As a result, we have some new target framework names net5.0, for apps that run anywhere, combines and replaces netcoreapp and netstandard. There’s also net5.0-windows (with Android and iOS flavors to come) for Windows-specific use cases, like UWP.OK, so .NET Standard is still around but we have new target framework names. What should you do? With .NET Standard 2.0 being the last version to support .NET Framework, use netstandard2.0 for code sharing between .NET Framework and other platforms. You can use netstandard2.1 to share between Mono, Xamarin, and .NET Core 3.x, and then net5.0 for anything else (and especially when you want to use .NET 5 improvements and new language features). You’ll definitely want to check out the post for all the details.What a mess .NET Standard promised API uniformity and now we’re even having to choose between that and a new way of doing things. The post lays out why .NET Standard is problematic, and it makes sense. But when you’re trying to innovate at a feverish pace but still support customers on .NET Framework, the cost is complexity—and the irony is that with uniformity with .NET 5, that won’t apply when you have legacy apps to support.Dev Discussions Isaac AbrahamAs much as we all love C#, there’s something that needs reminding from time to time C# is not .NET. It is a large and important part of .NET, for sure, but .NET also supports two other languages Visual Basic and F#. As for F#, it’s been gaining quite a bit of popularity over the last several years, and for good reason it’s approachable, concise, and allows you to embrace a functional-first language while leveraging the power of the .NET ecosystem.I caught up with Isaac Abraham to learn more about F#. 
After spending a decade as a C# developer, Isaac embraced the power of F# and founded Compositional IT, a functional-first consultancy. He’s also the author of Get Programming with F# A Guide for .NET Developers.I know it’s more nuanced than this but if you could sell F# to C# developers in a sentence or two, how would you do it?F# really does bring the fun back into software development. You’ll feel more productive, more confident and more empowered to deliver high-quality software for your customers.Functional programming is getting a lot of attention in the C# world, as the language is adopting much of its concepts (especially with C# 9). It’s a weird balance trying to have functional concepts in an OO language. How do you feel the balance is going?I have mixed opinions on this. On the one hand, for the C# dev it’s great—they have a more powerful toolkit at their disposal. But I would hate to be a new developer starting in C# for the first time. There are so many ways to do things now, and the feature (and custom operator!) count is going through the roof. More than that, I worry that we’ll end up with a kind of bifurcated C# ecosystem—those that adopt the new features and those that won’t, and worse still the risk of losing the identity of what C# really is.I’m interested to see how it works out. Introducing things like records into C# is going to lead to some new and different design patterns being used that will have to naturally evolve over time.I won’t ask if C# will replace F#—you’ve eloquently written about why the answer is no. I will ask you this, though is there a dividing line of when you should use C# (OO with functional concepts) or straight to F#?I’m not really sure the idea of “OO with functional concepts” really gels, to be honest. Some of the core ideas of FP—immutability and expressions—are kind of the opposite of OO, which is all centered around mutable data structures, statements and side effects. By all means use the features C# provides that come from the FP world and use them where it helps—LINQ, higher order functions, pattern matching, immutable data structures—but the more you try out those features to try what they can do without using OO constructs, the more you’ll find C# pulls you “back.” It’s a little like driving an Audi on the German motorway but never getting out of third gear.My view is that 80% of the C# population today—maybe more—would be more productive and happier in F#. If you’re using LINQ, you favour composition over inheritance, and you’re excited by some of the new features in C# like records, switch expressions, tuples, and so on, F# will probably be a natural fit for you. All of those features are optimised as first-class citizens of the language, whilst things like mutability and classes are possible, but are somewhat atypical.This also feeds back to your other question—I do fear that people will try these features out within the context of OO patterns, find them somehow limited, and leave thinking that FP isn’t worthwhile.Let’s say I’m a C# programmer and want to get into F#. Is there any C# knowledge that will help me understand the concepts, or is it best to clear my mind of any preconceived notions before learning?Probably the closest concept would be to imagine your whole program was a single LINQ query. Or, from a web app—imagine every controller method was a LINQ query. In reality it’s not like that, but that’s the closest I can think of. The fact that you’ll know .NET inside and out is also a massive help. 
The things to forget are basically the OO and imperative parts of the language classes, inheritance, mutable variables, while loops, and statements. You don’t really use any of those in everyday F# (and believe me, you don’t need any of them to write standard line of business apps).As an OO programmer, it’s so painful always having to worry about “the billion dollar mistake” nulls. We can’t assume anything since we’re mutating objects all over the place and often throw up our hands and do null checks everywhere (although the language has improved in the last few versions). How does F# handle nulls? Is it less painful?For F# types that you create, the language simply says null isn’t allowed, and there’s no such thing as null. So in a sense, the problem goes away by simply removing it from the type system. Of course, you still have to handle business cases of “absence of a value,” so you create optional values—basically a value that can either have something or nothing. The compiler won’t let you access the “something” unless you first “check” that the value isn’t nothing.So, you spend more time upfront thinking about how you model your domain rather than simply saying that everything and anything is nullable. The good thing is, you totally lose that fear of “can this value be null when I dot into it” because it’s simply not part of the type system. It’s kind of like the flow analysis that C# 8 introduced for nullability checks—but instead of flow analysis, it’s much simpler. It’s just a built-in type in the language. There’s nothing magical about it.However, when it comes to interoperating with C# (and therefore the whole BCL), F# doesn’t have any special compiler support for null checks, so developers will often create a kind of “anti-corruption” layer between the “unsafe outside world” and the safe F# layer, which simply doesn’t have nulls. There’s also work going on to bring in support for the nullability surface in the BCL but I suspect that this will be in F# 6.F#, and functional programming in general, emphasizes purity no side effects. Does F# enforce this, or is it just designed with it in mind?No, it doesn’t enforce it. There’s some parts of the language which make it obvious when you’re doing a side effect, but it’s nothing like what Haskell does. For starters, the CLR and BCL don’t have any notion of a side effect, so I think that this would difficult to introduce. It’s a good example of some of the design decisions that F# took when running on .NET—you get all the goodness of .NET and the ecosystem, but some things like this would be challenging to do. In fact, F# has a lot of escape hatches like this. It strongly guides you down a certain path, but it usually has ways that you can do your own thing if you really need to.You still can (and people do) write entire systems that are functionally pure, and the benefits of pure functions are certainly something that most F# folks are aware of (it’s much easier to reason about and test, for example). It just means that the language won’t force you to do it.What is your one piece of programming advice?Great question. I think one thing I try to keep in mind is to avoid premature optimisation and design. Design systems for what you know is going to be needed, with extension points for what will most likely be required. You can never design for every eventuality, and you’ll sometimes get it wrong, that’s life—optimise for what is the most likely outcome.To read the entire interview, head on over to my site.?? Last week in the .NET world?? 
?? The Top 3: .NET 5 RC 1 is out: Richard Lander has the announcement, Jeremy Likness talks about EF updates, and Daniel Roth discusses what's new for ASP.NET. Immo Landwerth speaks about the future of .NET Standard. Steve Gordon walks through performance optimizations. ?? Announcements: There's a new Learn module for deploying a cloud-native ASP.NET microservice with GitHub Actions. Mark Downie talks about disassembly improvements for optimized managed debugging. Microsoft Edge announces source order viewer in their DevTools. Tara Overfield provides September cumulative updates for the .NET Framework. ?? Community and events: Microsoft Ignite occurs this Tuesday through Thursday. The .NET Docs Show talks about the dot.net site with Maíra Wenzel. Three .NET community standups this week: .NET Tooling finds latent bugs in .NET 5, Entity Framework talks EF Core 5 migrations, and ASP.NET discusses new features for .NET API developers. ?? ASP.NET / Blazor: Shaun Curtis launches a series on building a database application in Blazor. Patrick Smacchia walks through the architecture of a C# game rendered with Blazor, Xamarin, UWP, WPF, and Winforms. David Ramel writes about increased Blazor performance in .NET 5 RC1. Rick Strahl warns about missing await calls for async code in ASP.NET Core middleware. Dominique St-Amand secures an ASP.NET Core Web API with an API key. Vladimir Pecanac discusses how to secure sensitive data locally with ASP.NET Core. David Grace explores why your app might not be working in IIS. ?? .NET Core: Kay Ewbank discusses the latent bug discovery feature coming with .NET 5. Michal Bialecki executes raw SQL with EF 5. Fredrik Rudberg serves images stored in a database through static URLs using .NET Core 3.1. Shawn Wildermuth talks about hosting Vue in .NET Core. ? The cloud: Vladimir Pecanac configures the Azure Key Vault in ASP.NET Core. Richard Seroter compares the CLI experience between Azure, AWS, and GCP. Jon Gallant walks through the September updates to the Azure SDKs. Christopher Scott introduces the new Azure Tables client libraries. Daniel Krzyczkowski extracts Excel file content with Azure Logic Apps and Azure Functions. Kevin Griffin touts the great performance for Azure Static Web Apps and Azure Functions. Matt Small finds a gotcha: you can't use an Azure Key Vault firewall if you're in a situation where you're using App Gateway along with a Key Vault certificate for SSL termination. Gunnar Peipman hosts applications on Azure B-series virtual machines. ?? C#: Jeremy Clark shows how to see all the exceptions when calling "await Task.WhenAll." Jerome Laban uses MSBuild items and properties in C# 9 source generators. ?? F#: A nice rundown of 10 ways to try F# in the browser. Daniel Bykat talks about the PORK framework and its use with F#. Alican Demirtas discusses string interpolation in F#. Paul Biggar talks about his async adventures. ?? Tools: Derek Comartin does a review of MediatR. Tom Deseyn uses OpenAPI with .NET Core. John Juback builds cross-platform desktop apps with Electron.NET. Andrew Lock continues his k8s series by deploying applications with Helm. You can now debug Linux core dumps on the Windows Subsystem for Linux (WSL) or a remote Linux system directly from Visual Studio. Adam Storr uses Project Tye to run .NET worker services. ?? Xamarin: Joe Meyer wires up a fullscreen video background. Khalid Abuhakmeh animates a mic drop. Denys Fiediaiev uses MvvmCross to log with Xamarin. ??
PodcastsThe .NET Rocks podcast talks about ML with Zoiner Tejada.Software Engineering Radio talks with Philip Kiely about writing for software developers.The Merge Conflict podcast discusses the new Half type.The Coding Blocks podcast asks is Kubernetes programming?The Azure DevOps Podcast talks with Steve Sanderson about Blazor.?? VideosThe ON .NET Show talks about Steeltoe configuration.Azure Friday talks about Azure landing zones.Scott Hanselman gives us a primer on the cloud.The ASP.NET Monsters send dates from JavaScript to .NET.


The code execution cannot proceed because msodbcsq ...
Category: Other

Question The code execution cannot proceed because msodbcsql17.dll was not found. Reinstalling t ...


Views: 0 Likes: 14
How to Transfer Database from Sql Server 2012 to S ...
Category: SQL

Problem I needed to transfer the database and the sche ...


Views: 193 Likes: 68
SQL Server Tips and Tricks
Category: SQL

Error Debugging Did you know you could double click on the SQL Error and ...


Views: 0 Likes: 44
ExecuteDataReader Error: An error occured while at ...
Category: SQL

Question How do I solve the ExecuteDataReader Error that says System.DataException. An error occ ...


Views: 0 Likes: 29
 The .NET Stacks #8: functional C# 9, .NET Foundation nominees, Azure community, more!
The .NET Stacks #8 functional C# 9, .NET Foundat ...

This is an archive of my weekly (free!) newsletter, -The .NET Stacks-. Consider subscribing today to get this content right away! Subscribers don’t have to wait a week to receive the content.On tap this weekC# 9 a functionally better releaseThe .NET Foundation nominees are out!Dev Discussions Michael CrumpCommunity roundupC# 9 a functionally better releaseI’ve been writing a lot about C# 9 lately. No, seriously a lot. This week I went a little nuts with three posts I talked about records, pattern matching, and top-level programs. I’ve been learning a ton, which is always the main goal, but what’s really interesting is how C# is starting to blur the lines between object-oriented and functional programming. Throughout the years, we’ve seen some FP concepts visit C#, but I feel this release is really kicking it up a notch.In the not-so-distant past, discussing FP and OO meant putting up with silly dogmatic arguments that they have to be mutually exclusive. It isn’t hard to understand why traditional concepts of OO constructs are grouping data and behavior (state) in single mutable objects, and FP draws a hard line between data and behavior in the name of purity and minimizing side effects (immutability by default).So, typically as a .NET developer, this left you with two choices C#, .NET’s flagship language, or F#, a wonderful functional language that is concise (no curlies or semi-colons and great type inference), convenient (functions as first-class objects), and has default immutability.However, this is no longer a binary choice. For example, let’s look at a blog post from a few years ago that maps C# concepts to F# concepts.C#/OO has variables, F#/FP has immutable values. C# 9 init-only properties and records bring that ability to C#.C# has statements, F# has expressions. C# 8 introduced switch expressions and enhanced pattern matching, and has more expressions littered throughout the language now.C# has objects with methods, F# has types and functions. C# 9 records are also blurring the line in this regard.So here we are, just years after wondering if F# will ever take over C#, we see people wondering the exact opposite as Isaac Abraham asks will C# replace F#? (Spoiler alert no.)There is definitely pushback in the community from C# 8 purists, to which I say why not both? You now have the freedom to “bring in” the value of functional programming, while doing it in a familiar language. You can bring in these features, along with C#’s compatibility. These changes will not break the language. And if they don’t appeal to you, you don’t have to use them. (Of course, mixing FP and OO in C# is not always graceful and is definitely worth mentioning.)This isn’t a C# vs F# rant, but it comes down to this is C# with functional bits “good enough” because of your team’s skillset, comfort level, and OO needs? Or do you need a clean break, and immutability by default? As for me, I enjoy seeing these features gradually introduced. For example, C# 9 records allow you to build immutable structures but the language isn’t imposing this on you for all your objects. You need to opt in.A more nuanced question to ask is will C#’s functional concepts ever overpower the language and tilt the scales in FP’s direction? Soon, I’ll be interviewing Phillip Carter (the PM for F# at Microsoft) and am curious to hear what he has to say about it. Any questions? 
Let me know soon and I’ll be sure to include them.The .NET Foundation nominees are outThis week, the .NET Foundation announced the Board of Director nominees for the 2020 campaign. I am familiar with most of these folks (a few are subscribers, hi!)—it’s a very strong list and you probably can’t go wrong with anyone. I’d encourage you to look at the list and all their profiles to see who you’d like to vote for (if you are a member). If not, you can apply for membership. Or, if you’re just following the progress of the foundation, that’s great too.I know I’ve talked a lot about the Foundation lately, but this is an important moment for the .NET Foundation. The luster has worn off and it’s time to address the big questions what exactly is the Foundation responsible for? Where is the line between “independence” and Microsoft interests? When OSS projects collide with Microsoft interests, what is the process to work through it? And will the Foundation commit itself to open communication and greater transparency?As for me, these are the big questions I hope the nominees are thinking about, among other things.Dev Discussions Michael CrumpIf you’ve worked on Azure, you’ve likely come across Michael Crump’s work. He started Azure Tips and Tricks, a collection of tips, videos, and talks—if it’s Azure, it’s probably there. He also runs a popular Twitch stream where he talks about various topics.I caught up with Michael to talk about how he got to working on Azure at Microsoft, his work for the developer community, and his programming advice.My crack team of researchers tell me that you were a former Microsoft Silverlight MVP. Ah, memories. Do you miss it?Ah, yes. I was a Microsoft MVP for 4 years, I believe. I spent a lot of time working with Silverlight because, at that time, I was working in the medical field and a lot of our doctors used Macs. Since I was a C# WinForms/WPF developer, I jumped at the chance to start using those skillsets for code that would run on PCs and Macs.you walk me through your path to Microsoft, and what you do at Microsoft now?I started in Mac tech support because after graduating college, Mac tech support agents were getting paid more than PC agents (supply and demand, I guess!). Then, I was a full-time software developer for about 8 years. I worked in the medical field and created a calculator that determined what amount of vitamins our pre-mature babies should take.Well, after a while, the stress got to me and I discovered my love for teaching and started a job at Telerik as a developer advocate. Then, the opportunity came at Microsoft for a role to educate and inspire application developers. So my role today consists of developer content in many forms, and helping to set our Tier 1 event strategy for app developers.Tell us a little about Azure Tips and Tricks. What motivated you to get started, and how can people get involved?Azure Tips and Tricks was created because I’d find a thing or two about Azure, and forget how to do it again. It was originally designed as something just for me but many blog aggregators starting picking up on the posts and we decided to go big with it—from e-books, blog posts, videos, conference talks and stickers.The easiest way to contribute is by clicking on the Edit Page button at the bottom of each page. You can also go to http//source.azuredev.tips to learn more.What made you get into Twitch? What goes on in your channel?I loved the ability to actually code and have someone watch you and help you code. 
The interactivity aspect and seeing the same folks come back gets you hooked.The stream is broken down into three streams a weekAzure Tips and Tricks, every Wednesday at 1 PM PST (Pacific Standard Time, America)Live Interviews with Developers, every Friday at 9 AM PST (Pacific Standard Time, America)Live coding/Security Sunday streams, Sundays at 1030 AM PST (Pacific Standard Time, America)What is your one piece of programming advice?I actually published a list of my top 12 things every developer should know.My top one would probably be to learn a different programming language (other than your primary language). Simply put, it broadens your perspective and permits a deeper understanding of how a computer and programming languages work.This is only an excerpt of my talk with Michael. Read the full interview over at my website.Community roundupAn extremely busy week, full of great content!MicrosoftAnnouncementsAKS now supports confidential workloads.The Edge team announces the storage access API.Microsoft introduces the Text Analytics for Health APIs.Pratik Nadagouda talks about updates to the Git experience in Visual Studio.Eric Boyd shows off new Azure Cognitive Services capabilities.The .NET Foundation has the nominees set for the 2020 campaign.VideosThe Visual Studio Toolbox begins a series on performance profiling and continues their series on finding code in Visual Studio.The Xamarin Show discusses App Center and App Insights.Data Exposed continues their “why Azure SQL is best for devs” series.So many community standups we have the Languages & Runtime one, Entity Framework, and ASP.NET Core.Blog postsJason Freeberg continues his Zero to Hero with App Service series.Miguel Ramos dives into WinUI 3 in desktop apps.Jayme Singleton runs through the .NET virtual events in July.Leonard Lobel highlights the Azure CosmosDB change feed.Community BlogsASP.NET CoreChristian Nagel walks through local users with ASP.NET Core.Andrew Lock continues talking about adding an endpoint graph to ASP.NET Core..Thomas Ardal adds Razor runtime compilation for ASP.NET Core.Anthony Giretti exposes proto files in a lightweight gRPC service.Neel Bhatt introduces event sourcing in .NET Core.Dominick Baier talks about flexible access token validation in ASP.NET Core.The .NET Rocks podcast talks about ASP.NET Core Endpoints with Steve Smith.The ON.NET show discusses SignalR.BlazorWael Kdouh secures a Blazor WebAssembly Application With Azure Active Directory.Jon Hilton discusses Blazor validation logic on the client and the server.Marinko Spasojevic consumes a web API with Blazor WebAssembly.Matthew Jones continues writing Minesweeper for Blazor Web Assembly.Entity FrameworkKhalid Abuhakmeh talks about adding custom database functions for EF Core.Jon P. 
Smith discusses soft deleting data with Global Query Filters in EF Core.LanguagesDave Brock goes deep on C# 9, with records, pattern matching, and top-level programs.Ian Griffiths continues his series on C# 8 nullable references with conditional post-conditions..Khalid Abuhakmeh walks through reading from a file in C#.AzureJoseph Guadagno uses Azure Key Vault to secure Azure Functions (hi, Joe!).Visual Studio Magazine walks through Azure Machine Learning Studio Web.Damien Bowden walks through using external inputs in Azure Durable Functions.Azure Tips and Tricks has new content about Azure certifications for developers.Jason Gaylord discusses adding Azure feature flags to client applications.XamarinSimon Bisson pontificates on .NET MAUI and the future of Xamarin.Leomaris Reyes uses biometric identification in Xamarin.Forms.Kym Phillpotts creates a pizza shop in Xamarin.Forms.The Xamarin Podcast discusses Xamarin.Forms 4.7 and other topics.ToolsJetBrains introduces the .NET Guide.Jason Gaylord shows us how to create and delete branches in Visual Studio Code.Mike Larah uses custom browser configurations with Visual Studio.Muhammad Rehan Saeed shows us how to publish NuGet packages quickly.Patrick Smacchia talks about his top 10 Visual Studio refactoring tips.Bruce Cottman asks if GitHub Actions will kill off Jenkins.ProjectsOren Eini releases CosmosDB Profiler 1.0.Manuel Grundner introduces his new Tasty project, an effort to bring the style of his favorite test frameworks to .NET.Community podcasts and videosScott Hanselman shows off his Raspberry Pi and shows off how you can run .NET Notebooks and .NET Interactive on it, talks about Git pull requests, shows off Git 101 basics, and walks through setting up a prompt with Git, Windows Terminal, PowerShell, and Cascadia Code.The ASP.NET Monsters talk about NodaTime and API controllers.The 6-Figure Developer podcast talks about AI with Matthew Renze.The No Dogma podcast talks with Bill Wagner about .NET 5 and unifying .NET.The Coding Blocks podcast studies The DevOps Handbook.New subscribers and feedbackHas this email been forwarded to you? Welcome! I’d love for you to subscribe and join the community. I promise to guard your email address with my life.I would love to hear any feedback you have for The .NET Stacks! My goal is to make this the one-stop shop for weekly updates on developing in the .NET ecosystem, so I look forward to any feedback you can provide. You can directly reply to this email, or talk to me on Twitter as well. See you next week!
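The C# 9 discussion earlier in this issue mentions records, init-only properties, and switch expressions without showing them side by side. The following is a purely illustrative sketch (not taken from the newsletter) of how those features let you write immutable, expression-oriented C#; the Person type and sample values are assumptions for the example.

#nullable enable
using System;

// A record: an immutable reference type with value-based equality.
public record Person(string First, string Last)
{
    // Init-only property: settable during object initialization, read-only afterwards.
    public string? Nickname { get; init; }
}

public static class Program
{
    public static void Main()
    {
        var ada = new Person("Ada", "Lovelace") { Nickname = "Countess" };

        // Non-destructive mutation: copies the record with one property changed.
        var renamed = ada with { First = "Augusta" };

        // Switch expression with pattern matching instead of if/else statements.
        static string Describe(Person p) => p switch
        {
            { Nickname: not null } => $"{p.First} goes by {p.Nickname}",
            _ => $"{p.First} {p.Last}"
        };

        Console.WriteLine(Describe(ada));
        Console.WriteLine(Describe(renamed));
        Console.WriteLine(ada == renamed); // value-based equality: False, First differs
    }
}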


Docker Container Micro-Service Error: Can not Conn ...
Category: Docker

Problem Can not Connect to SQL Server in Docker Container from Microsoft Sql Server Management</ ...


Views: 257 Likes: 90
Sql Outer Join Slow Query
Category: SQL

when thinking of using an Outer Join on a select query in T-SQL, you should always think about perfo ...


Views: 294 Likes: 83
Writing Tips for Improving Your Pull Requests
Writing Tips for Improving Your Pull Requests

You've just finished knocking out a complex feature. You're happy with the state of the code, you're a bit brain-fried, and the only thing between you and the finish line is creating a pull request. You're not going to leave the description field blank, are you? You're tired, you want to be done, and can't people just figure out what you did by looking at the code? I get it. The impulse to skip the description is strong, but a little effort will go a long way toward making your coworkers' lives easier when they review your code. It's courteous, and–lucky for you!–it doesn't have to be hard. If you're thinking I'm going to suggest writing a book in the description field, you're wrong. In fact, I'm going to show you how to purposely write less by using the techniques below.

Make it Scannable
If your code is a report for the board of directors, your pull request description is the executive summary. It should be short and easy to digest while packing in as much important information as possible. The best way to achieve this combination is to make the text scannable. You can use bold or italic text to draw emphasis to important details in a paragraph. However, the best way to increase scan-ability is the liberal application of bulleted lists. Most of my PR descriptions start like this:

If merged, this PR will:
* Add a Widget model
* Add a controller for performing CRUD on Widgets
* Update routes.rb to include paths for Widgets
* Update user policies to ensure only admins can delete Widgets
* Add tests for policy changes
* …

There are a few things to note here. I'm using callouts to bring attention to important changes, including the object that's being added and important files that are being modified. The sentences are short and digestible. They contain one useful piece of information each. And, for readability, they all start with a capital letter and end with no punctuation. Consistency of formatting makes for easier reading.

Speak Plainly
Simpler words win if you're trying to quickly convey meaning, and normal words are preferable to jargon. Here are a few examples:
* Replace utilize with use. They have different meanings, and you're likely wanting the meaning of use, which has the added bonus of being fewer characters.
* Replace ask with request. "The ask here is to replace widget A with widget B." Ask is not a noun; it's a verb.
* Replace operationalize with do. A savings of 12 characters and 5 syllables!
There are loads of words that we use daily that could be replaced with something simpler; I bet you can think of a few off the top of your head. For more examples, see my book recommendations at the end of this article.

Avoid Adverbs
Piggybacking on the last suggestion, adverbs can often be dropped to tighten up your prose. Spotting an adverb is easy. Look for words that end in -ly. Really, vastly, quickly, slowly–these are adverbs and they usually can be removed without changing the meaning of your sentence. Here's an example:
"Replace a really slowly performing ActiveRecord query with a faster raw SQL query"
"Replace a slow ActiveRecord query with a faster raw SQL query"
Since we dropped the adverbs, performing doesn't work on its own, so we can remove it and save even more characters.

Simplify Your Sentences
Sentences can sometimes end up unnecessarily bloated. Take this example: "The reason this is marked DO NOT MERGE is because we're missing the final URL for the SSO login path." The reason this is can be shortened to simply this is. The is before because is unnecessary and can be removed.
And the last part of the sentence can be rejiggered to be more direct while eliminating an unnecessary prepositional phrase. The end result is succinct: "This is marked DO NOT MERGE because we're missing the SSO login path's production URL."

Bonus Round: Avoid Passive Voice
Folks tend to slip into passive voice when talking about bad things like bugs or downtime. Uncomfortable things make people want to ensure they're dodging–or not assigning–blame. I'm not saying you should throw someone under the bus for a bug, but it helps to be direct when writing about your code. "We were asked to implement the feature that caused this bug by the sales team." The trouble here is were asked. This makes the sentence sound weak. Luckily, a rewrite is easy: "The sales team asked us to implement the feature that caused this bug." By moving the subject from the end of the sentence to the beginning, we ditch an unnecessary prepositional phrase ("by the sales team"), shorten the sentence, and the overall meaning is now clear and direct.

There's More!
But we can't cover it all here. If you want to dig deeper, I recommend picking up The Elements of Style. It's a great starting point for improving your writing. Also, Junk English by Ken Smith is a fun guide for spotting and avoiding jargon, and there's a sequel if you enjoy it. The post Writing Tips for Improving Your Pull Requests appeared first on Simple Thread.


Sql Optimize Query to Run Faster
Category: SQL

How to Optimiz ...


Views: 322 Likes: 99
The INSERT statement conflicted with the FOREIGN K ...
Category: SQL

Question How do you resolve the error that says The INSERT stateme ...


Views: 0 Likes: 30
Full Stack Mid-Level PHP Developer for Ed Tech (Remote Full-time or Contract) at RocketLit | Y Combinator
Full Stack Mid-Level PHP Developer for Ed Tech (Re ...

Hi there! We're looking for a full-stack mid-level to senior web developer with an expertise in PHP frameworks for a full-time position at RocketLit, our science literacy and assessment company. We've been around for 6+ years and are Summer 2016 YC Alum. Our new platform, InnerOrbit, features the highest quality NGSS Assessments, phenomena, and 3D reports and we believe we are on the forefront of science learning for the classroom. We've gained a ton of traction in the past year with InnerOrbit and have been growing fast and somewhat virally. # What you'll be building Depending on your expertise and what you enjoy, you could be working in the realms of backend development and dev ops, front end and UX, or even on the technology side of content and marketing. We work with a lot of large data sets and need to build out APIs as well as work with external APIs for things like rostering and other integrations. So, if backend is your thing, you'll be able to thrive there. At the same time, we very much value slick UX -- and not the boring, obvious kind. We have some pretty advanced UIs that we strive to make intuitive, while offering a ton of function. There are UX puzzles to solve and if you're into that, we're your team to join. On top of that, our reports are pretty snazzy and we try to not just present data graphically, but represent data in ways that actually makes sense and is helpful for every user (by using several graphing libraries). If content and marketing is something you love, but from a developer standpoint, we've got a lot of that going on too... from email campaigns, to interfacing with data tracking and analytics APIs, to creating fantastic pages to communicate with users about all of our new features. # What we can offer you - Competitive annual salary starting at 70k - 90k - 401k with matching - 100% Remote work (though a few in-person meetings here and there could be cool too) - Very flexible schedules - A quickly growing, small, fun, and driven team to work with # Qualified candidates will have - 3+ years of Mid-Level Full-Stack Developer, particularly with PHP - 1+ years experience with Laravel other PHP frameworks - Ability to create/ work with custom and legacy php frameworks - Vanilla PHP (without a framework) - API integrations - MySQL/Maria/Postgres - Ability to read and write Vanilla SQL queries including Joins, subqueries, and aggregate functions - Vanilla javascript - CSS and experience with SCSS/LESS/SASS - 2+ years of Git experience # Nice to haves include - OAuth Experience - Jquery - 1+ year of React/Angular/Vue Experience - Google cloud / AWS / Linode / Linux experience - Some Photoshop Experience - D3js / NVD3 / ChartJS - Understanding of SEO principles - Understanding of UX principles # How you can apply Please send your resume to with the subject "Full-stack Awesomeness" along with any portfolios, projects you've been working on, and your github. A little bit about yourself wouldn't hurt either.


Microsoft SQL Server, Error: 258
Category: SQL

Error A network-related or instance-specific error occurred while establishing ...


Views: 492 Likes: 102
Cannot create a row of size which is greater than ...
Category: Other

Question How do you solve this error, "Cannot create a row of size which is greater than the all ...


Views: 0 Likes: 8
Login failed for user . (Microsoft SQL, Error: 184 ...
Category: SQL

Problem When you are trying to log in to SQL Server with a ne ...


Views: 373 Likes: 108
.NET Framework October 2023 Security and Quality Rollup Updates
.NET Framework October 2023 Security and Quality R ...

This week we released the October 2023 Security and Quality Rollup Updates for .NET Framework.

Security
The October Security and Quality Rollup Update does not contain any new security fixes. See the September 2023 Security and Quality Rollup for the latest security updates.

Quality and Reliability
This release contains the following quality and reliability improvements.
- ASP.NET: Addresses an issue with "System.ArgumentException: Illegal characters in path" in some ASP.NET MVC requests.
- SqlClient: Addresses an issue with SQL event source telemetry.

Getting the Update
The Security and Quality Rollup is available via Windows Update, Windows Server Update Services, and Microsoft Update Catalog. **Note** Customers that rely on Windows Update and Windows Server Update Services will automatically receive the .NET Framework version-specific updates. Advanced system administrators can also make use of the direct Microsoft Update Catalog download links below for .NET Framework-specific updates. Before applying these updates, please review the .NET Framework version applicability carefully to ensure that you only install updates on systems where they apply.

The following list is for Windows 10, version 1507 and Windows Server 2016 versions and newer operating systems; each entry shows the product, its cumulative update, and the Catalog entry per .NET Framework version.
- Windows 11, version 22H2 (cumulative update 5031323): .NET Framework 3.5, 4.8.1 (Catalog 5030651)
- Windows 11, version 21H2 (cumulative update 5031225): .NET Framework 3.5, 4.8 (Catalog 5030842); .NET Framework 3.5, 4.8.1 (Catalog 5030650)
- Microsoft server operating system, version 22H2 (cumulative update 5031605): .NET Framework 3.5, 4.8 (Catalog 5030999); .NET Framework 3.5, 4.8.1 (Catalog 5030998)
- Microsoft server operating system, version 21H2 (cumulative update 5031221): .NET Framework 3.5, 4.8 (Catalog 5030999); .NET Framework 3.5, 4.8.1 (Catalog 5030998)
- Windows 10, version 22H2 (cumulative update 5031224): .NET Framework 3.5, 4.8 (Catalog 5030841); .NET Framework 3.5, 4.8.1 (Catalog 5030649)
- Windows 10, version 21H2 (cumulative update 5031223): .NET Framework 3.5, 4.8 (Catalog 5030841); .NET Framework 3.5, 4.8.1 (Catalog 5030649)
- Windows 10 1809 (October 2018 Update) and Windows Server 2019 (cumulative update 5031222): .NET Framework 3.5, 4.7.2 (Catalog 5031005); .NET Framework 3.5, 4.8 (Catalog 5031010)
- Windows 10 1607 (Anniversary Update) and Windows Server 2016: .NET Framework 3.5, 4.6.2, 4.7, 4.7.1, 4.7.2 (Catalog 5031362); .NET Framework 4.8 (Catalog 5031000)

The following list is for earlier Windows and Windows Server versions; each entry shows the product, its security and quality rollup, and the Catalog entry per .NET Framework version.
- Windows Server 2012 (rollup 5031227): .NET Framework 3.5 (Catalog 5030160); .NET Framework 4.6.2, 4.7, 4.7.1, 4.7.2 (Catalog 5031007); .NET Framework 4.8 (Catalog 5031002)
- Windows Server 2012 R2 (rollup 5031228): .NET Framework 3.5 (Catalog 5029915); .NET Framework 4.6.2, 4.7, 4.7.1, 4.7.2 (Catalog 5031008); .NET Framework 4.8 (Catalog 5031003)
- Windows Embedded 7 and Windows Server 2008 R2 SP1 (rollup 5031226): .NET Framework 3.5.1 (Catalog 5029938); .NET Framework 4.6.2, 4.7, 4.7.1, 4.7.2 (Catalog 5031006); .NET Framework 4.8 (Catalog 5031001)
- Windows Server 2008 (rollup 5031229): .NET Framework 2.0, 3.0 (Catalog 5029937); .NET Framework 4.6.2 (Catalog 5031006)

Previous Monthly Rollups
The last few .NET Framework monthly updates are listed below for your convenience:
- .NET Framework September 2023 Cumulative Update Preview
- .NET Framework September 2023 Security and Quality Rollup Updates
- .NET Framework August 2023 Cumulative Update Preview
- .NET Framework August 2023 Security and Quality Rollup Updates

The post .NET Framework October 2023 Security and Quality Rollup Updates appeared first on .NET Blog.
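Before applying a version-specific update, it can help to confirm which .NET Framework 4.x release a machine actually has. Below is a minimal sketch (not part of the original post): on Windows, the documented Release DWORD under the NDP v4 registry key identifies the installed version. The threshold values shown are the commonly documented minimums and vary slightly between Windows builds, so treat the mapping as approximate.

using System;
using Microsoft.Win32;

// Minimal sketch: read the documented "Release" value to see which
// .NET Framework 4.x version is installed before applying a version-specific update.
class FrameworkVersionCheck
{
    static void Main()
    {
        const string subkey = @"SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full";

        using var key = RegistryKey
            .OpenBaseKey(RegistryHive.LocalMachine, RegistryView.Registry64)
            .OpenSubKey(subkey);

        if (key?.GetValue("Release") is int release)
        {
            // Approximate minimums; exact values differ per Windows build.
            string version =
                release >= 533320 ? "4.8.1 or later" :
                release >= 528040 ? "4.8" :
                release >= 461808 ? "4.7.2" :
                "4.7.1 or earlier";

            Console.WriteLine($"Release {release}: .NET Framework {version}");
        }
        else
        {
            Console.WriteLine(".NET Framework 4.5 or later is not installed.");
        }
    }
}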


How to create SQL file in one command on Windows
Category: SQL

Have you ever wondered how to create a SQL file in one command? I found myself needing to create ...


Views: 0 Likes: 39
[Solved]: Invalid version: 16. (Microsoft.SqlServe ...
Category: Other

Question How do you solve the error below? Invalid versi ...


Views: 0 Likes: 24
Provision Azure IoT Hub devices using DPS and X.509 certificates in ASP.NET Core
Provision Azure IoT Hub devices using DPS and X.50 ...

This article shows how to provision Azure IoT hub devices using Azure IoT hub device provisioning services (DPS) and ASP.NET Core. The devices are setup using chained certificates created using .NET Core and managed in the web application. The data is persisted in a database using EF Core and the certificates are generated using the CertificateManager Nuget package. Code https//github.com/damienbod/AzureIoTHubDps Setup To setup a new Azure IoT Hub DPS, enrollment group and devices, the web application creates a new certificate using an ECDsa private key and the .NET Core APIs. The data is stored in two pem files, one for the public certificate and one for the private key. The pem public certificate file is downloaded from the web application and uploaded to the certificates blade in Azure IoT Hub DPS. The web application persists the data to a database using EF Core and SQL. A new certificate is created from the DPS root certificate and used to create a DPS enrollment group. The certificates are chained from the original DPS certificate. New devices are registered and created using the enrollment group. Another new device certificate chained from the enrollment group certificate is created per device and used in the DPS. The Azure IoT Hub DPS creates a new IoT Hub device using the linked IoT Hubs. Once the IoT hub is running, the private key from the device certificate is used to authenticate the device and send data to the server. When the ASP.NET Core web application is started, users can create new certificates, enrollment groups and add devices to the groups. I plan to extend the web application to add devices, delete devices, and delete groups. I plan to add authorization for the different user types and better paging for the different UIs. At present all certificates use ECDsa private keys but this can easily be changed to other types. This depends on the type of root certificate used. The application is secured using Microsoft.Identity.Web and requires an authenticated user. This can be setup in the program file or in the startup extensions. I use EnableTokenAcquisitionToCallDownstreamApi to force the OpenID Connect code flow. The configuration is read from the default AzureAd app.settings and the whole application is required to be authenticated. When the enable and disable flows are added, I will add different users with different authorization levels. builder.Services.AddDistributedMemoryCache(); builder.Services.AddAuthentication( OpenIdConnectDefaults.AuthenticationScheme) .AddMicrosoftIdentityWebApp( builder.Configuration.GetSection("AzureAd")) .EnableTokenAcquisitionToCallDownstreamApi() .AddDistributedTokenCaches(); Create an Azure IoT Hub DPS certificate The web application is used to create devices using certificates and DPS enrollment groups. The DpsCertificateProvider class is used to create the root self signed certificate for the DPS enrollment groups. The NewRootCertificate from the CertificateManager Nuget package is used to create the certificate using an ECDsa private key. This package wraps the default .NET APIs for creating certificates and adds a layer of abstraction. You could just use the lower level APIs directly. The certificate is exported to two separate pem files and persisted to the database. 
public class DpsCertificateProvider { private readonly CreateCertificatesClientServerAuth _createCertsService; private readonly ImportExportCertificate _iec; private readonly DpsDbContext _dpsDbContext; public DpsCertificateProvider(CreateCertificatesClientServerAuth ccs, ImportExportCertificate importExportCertificate, DpsDbContext dpsDbContext) { _createCertsService = ccs; _iec = importExportCertificate; _dpsDbContext = dpsDbContext; } public async Task<(string PublicPem, int Id)> CreateCertificateForDpsAsync(string certName) { var certificateDps = _createCertsService.NewRootCertificate( new DistinguishedName { CommonName = certName, Country = "CH" }, new ValidityPeriod { ValidFrom = DateTime.UtcNow, ValidTo = DateTime.UtcNow.AddYears(50) }, 3, certName); var publicKeyPem = _iec.PemExportPublicKeyCertificate(certificateDps); string pemPrivateKey = string.Empty; using (ECDsa? ecdsa = certificateDps.GetECDsaPrivateKey()) { pemPrivateKey = ecdsa!.ExportECPrivateKeyPem(); FileProvider.WriteToDisk($"{certName}-private.pem", pemPrivateKey); } var item = new DpsCertificate { Name = certName, PemPrivateKey = pemPrivateKey, PemPublicKey = publicKeyPem }; _dpsDbContext.DpsCertificates.Add(item); await _dpsDbContext.SaveChangesAsync(); return (publicKeyPem, item.Id); } public async Task<List<DpsCertificate>> GetDpsCertificatesAsync() { return await _dpsDbContext.DpsCertificates.ToListAsync(); } public async Task<DpsCertificate?> GetDpsCertificateAsync(int id) { return await _dpsDbContext.DpsCertificates.FirstOrDefaultAsync(item => item.Id == id); } } Once the root certificate is created, you can download the public pem file from the web application and upload it to the Azure IoT Hub DPS portal. This needs to be verified. You could also use a CA created certificate for this, if it is possible to create child chained certificates. The enrollment groups are created from this root certificate. Create an Azure IoT Hub DPS enrollment group Devices can be created in different ways in the Azure IoT Hub. We use a DPS enrollment group with certificates to create the Azure IoT devices. The DpsEnrollmentGroupProvider is used to create the enrollment group certificate. This uses the root certificate created in the previous step and chains the new group certificate from this. The enrollment group is used to add devices. Default values are defined for the enrollment group and the pem files are saved to the database. The root certificate is read from the database and the chained enrollment group certificate uses an ECDsa private key like the root self signed certificate. The CreateEnrollmentGroup method is used to set the initial values of the IoT Hub Device. The ProvisioningStatus is set to enabled. This means when the device is registered, it will be enabled to send messages. You could also set this to disabled and enable it after when the device gets used by an end client for the first time. A MAC or a serial code from the device hardware could be used to enable the IoT Hub device. By waiting till the device is started by the end client, you could choose a IoT Hub optimized for this client. 
public class DpsEnrollmentGroupProvider { private IConfiguration Configuration { get;set;} private readonly ILogger<DpsEnrollmentGroupProvider> _logger; private readonly DpsDbContext _dpsDbContext; private readonly ImportExportCertificate _iec; private readonly CreateCertificatesClientServerAuth _createCertsService; private readonly ProvisioningServiceClient _provisioningServiceClient; public DpsEnrollmentGroupProvider(IConfiguration config, ILoggerFactory loggerFactory, ImportExportCertificate importExportCertificate, CreateCertificatesClientServerAuth ccs, DpsDbContext dpsDbContext) { Configuration = config; _logger = loggerFactory.CreateLogger<DpsEnrollmentGroupProvider>(); _dpsDbContext = dpsDbContext; _iec = importExportCertificate; _createCertsService = ccs; _provisioningServiceClient = ProvisioningServiceClient.CreateFromConnectionString( Configuration.GetConnectionString("DpsConnection")); } public async Task<(string Name, int Id)> CreateDpsEnrollmentGroupAsync( string enrollmentGroupName, string certificatePublicPemId) { _logger.LogInformation("Starting CreateDpsEnrollmentGroupAsync..."); _logger.LogInformation("Creating a new enrollmentGroup..."); var dpsCertificate = _dpsDbContext.DpsCertificates .FirstOrDefault(t => t.Id == int.Parse(certificatePublicPemId)); var rootCertificate = X509Certificate2.CreateFromPem( dpsCertificate!.PemPublicKey, dpsCertificate.PemPrivateKey); // create an intermediate for each group var certName = $"{enrollmentGroupName}"; var certDpsGroup = _createCertsService.NewIntermediateChainedCertificate( new DistinguishedName { CommonName = certName, Country = "CH" }, new ValidityPeriod { ValidFrom = DateTime.UtcNow, ValidTo = DateTime.UtcNow.AddYears(50) }, 2, certName, rootCertificate); // get the public key certificate for the enrollment var pemDpsGroupPublic = _iec.PemExportPublicKeyCertificate(certDpsGroup); string pemDpsGroupPrivate = string.Empty; using (ECDsa? 
ecdsa = certDpsGroup.GetECDsaPrivateKey()) { pemDpsGroupPrivate = ecdsa!.ExportECPrivateKeyPem(); FileProvider.WriteToDisk($"{enrollmentGroupName}-private.pem", pemDpsGroupPrivate); } Attestation attestation = X509Attestation.CreateFromRootCertificates(pemDpsGroupPublic); EnrollmentGroup enrollmentGroup = CreateEnrollmentGroup(enrollmentGroupName, attestation); _logger.LogInformation("{enrollmentGroup}", enrollmentGroup); _logger.LogInformation("Adding new enrollmentGroup..."); EnrollmentGroup enrollmentGroupResult = await _provisioningServiceClient .CreateOrUpdateEnrollmentGroupAsync(enrollmentGroup); _logger.LogInformation("EnrollmentGroup created with success."); _logger.LogInformation("{enrollmentGroupResult}", enrollmentGroupResult); DpsEnrollmentGroup newItem = await PersistData(enrollmentGroupName, dpsCertificate, pemDpsGroupPublic, pemDpsGroupPrivate); return (newItem.Name, newItem.Id); } private async Task<DpsEnrollmentGroup> PersistData(string enrollmentGroupName, DpsCertificate dpsCertificate, string pemDpsGroupPublic, string pemDpsGroupPrivate) { var newItem = new DpsEnrollmentGroup { DpsCertificateId = dpsCertificate.Id, Name = enrollmentGroupName, DpsCertificate = dpsCertificate, PemPublicKey = pemDpsGroupPublic, PemPrivateKey = pemDpsGroupPrivate }; _dpsDbContext.DpsEnrollmentGroups.Add(newItem); dpsCertificate.DpsEnrollmentGroups.Add(newItem); await _dpsDbContext.SaveChangesAsync(); return newItem; } private static EnrollmentGroup CreateEnrollmentGroup(string enrollmentGroupName, Attestation attestation) { return new EnrollmentGroup(enrollmentGroupName, attestation) { ProvisioningStatus = ProvisioningStatus.Enabled, ReprovisionPolicy = new ReprovisionPolicy { MigrateDeviceData = false, UpdateHubAssignment = true }, Capabilities = new DeviceCapabilities { IotEdge = false }, InitialTwinState = new TwinState( new TwinCollection("{ \"updatedby\"\"" + "damien" + "\", \"timeZone\"\"" + TimeZoneInfo.Local.DisplayName + "\" }"), new TwinCollection("{ }") ) }; } public async Task<List<DpsEnrollmentGroup>> GetDpsGroupsAsync(int? certificateId = null) { if (certificateId == null) { return await _dpsDbContext.DpsEnrollmentGroups.ToListAsync(); } return await _dpsDbContext.DpsEnrollmentGroups .Where(s => s.DpsCertificateId == certificateId).ToListAsync(); } public async Task<DpsEnrollmentGroup?> GetDpsGroupAsync(int id) { return await _dpsDbContext.DpsEnrollmentGroups .FirstOrDefaultAsync(d => d.Id == id); } } Register a device in the enrollment group The DpsRegisterDeviceProvider class creates a new device chained certificate using the enrollment group certificate and creates this using the ProvisioningDeviceClient. The transport ProvisioningTransportHandlerAmqp is set in this example. There are different transport types possible and you need to chose the one which best meets your needs. The device certificate uses an ECDsa private key and stores everything to the database. The PFX for windows is stored directly to the file system. I use pem files and create the certificate from these in the device client sending data to the hub and this is platform independent. The create PFX file requires a password to use it. 
public class DpsRegisterDeviceProvider { private IConfiguration Configuration { get; set; } private readonly ILogger<DpsRegisterDeviceProvider> _logger; private readonly DpsDbContext _dpsDbContext; private readonly ImportExportCertificate _iec; private readonly CreateCertificatesClientServerAuth _createCertsService; public DpsRegisterDeviceProvider(IConfiguration config, ILoggerFactory loggerFactory, ImportExportCertificate importExportCertificate, CreateCertificatesClientServerAuth ccs, DpsDbContext dpsDbContext) { Configuration = config; _logger = loggerFactory.CreateLogger<DpsRegisterDeviceProvider>(); _dpsDbContext = dpsDbContext; _iec = importExportCertificate; _createCertsService = ccs; } public async Task<(int? DeviceId, string? ErrorMessage)> RegisterDeviceAsync( string deviceCommonNameDevice, string dpsEnrollmentGroupId) { int? deviceId = null; var scopeId = Configuration["ScopeId"]; var dpsEnrollmentGroup = _dpsDbContext.DpsEnrollmentGroups .FirstOrDefault(t => t.Id == int.Parse(dpsEnrollmentGroupId)); var certDpsEnrollmentGroup = X509Certificate2.CreateFromPem( dpsEnrollmentGroup!.PemPublicKey, dpsEnrollmentGroup.PemPrivateKey); var newDevice = new DpsEnrollmentDevice { Password = GetEncodedRandomString(30), Name = deviceCommonNameDevice.ToLower(), DpsEnrollmentGroupId = dpsEnrollmentGroup.Id, DpsEnrollmentGroup = dpsEnrollmentGroup }; var certDevice = _createCertsService.NewDeviceChainedCertificate( new DistinguishedName { CommonName = $"{newDevice.Name}" }, new ValidityPeriod { ValidFrom = DateTime.UtcNow, ValidTo = DateTime.UtcNow.AddYears(50) }, $"{newDevice.Name}", certDpsEnrollmentGroup); var deviceInPfxBytes = _iec.ExportChainedCertificatePfx(newDevice.Password, certDevice, certDpsEnrollmentGroup); // This is required if you want PFX exports to work. newDevice.PathToPfx = FileProvider.WritePfxToDisk($"{newDevice.Name}.pfx", deviceInPfxBytes); // get the public key certificate for the device newDevice.PemPublicKey = _iec.PemExportPublicKeyCertificate(certDevice); FileProvider.WriteToDisk($"{newDevice.Name}-public.pem", newDevice.PemPublicKey); using (ECDsa? ecdsa = certDevice.GetECDsaPrivateKey()) { newDevice.PemPrivateKey = ecdsa!.ExportECPrivateKeyPem(); FileProvider.WriteToDisk($"{newDevice.Name}-private.pem", newDevice.PemPrivateKey); } // setup Windows store deviceCert var pemExportDevice = _iec.PemExportPfxFullCertificate(certDevice, newDevice.Password); var certDeviceForCreation = _iec.PemImportCertificate(pemExportDevice, newDevice.Password); using (var security = new SecurityProviderX509Certificate(certDeviceForCreation, new X509Certificate2Collection(certDpsEnrollmentGroup))) // To optimize for size, reference only the protocols used by your application. 
using (var transport = new ProvisioningTransportHandlerAmqp(TransportFallbackType.TcpOnly)) //using (var transport = new ProvisioningTransportHandlerHttp()) //using (var transport = new ProvisioningTransportHandlerMqtt(TransportFallbackType.TcpOnly)) //using (var transport = new ProvisioningTransportHandlerMqtt(TransportFallbackType.WebSocketOnly)) { var client = ProvisioningDeviceClient.Create("global.azure-devices-provisioning.net", scopeId, security, transport); try { var result = await client.RegisterAsync(); _logger.LogInformation("DPS client created {result}", result); } catch (Exception ex) { _logger.LogError("DPS client created {result}", ex.Message); return (null, ex.Message); } } _dpsDbContext.DpsEnrollmentDevices.Add(newDevice); dpsEnrollmentGroup.DpsEnrollmentDevices.Add(newDevice); await _dpsDbContext.SaveChangesAsync(); deviceId = newDevice.Id; return (deviceId, null); } private static string GetEncodedRandomString(int length) { var base64 = Convert.ToBase64String(GenerateRandomBytes(length)); return base64; } private static byte[] GenerateRandomBytes(int length) { var byteArray = new byte[length]; RandomNumberGenerator.Fill(byteArray); return byteArray; } public async Task<List<DpsEnrollmentDevice>> GetDpsDevicesAsync(int? dpsEnrollmentGroupId) { if(dpsEnrollmentGroupId == null) { return await _dpsDbContext.DpsEnrollmentDevices.ToListAsync(); } return await _dpsDbContext.DpsEnrollmentDevices.Where(s => s.DpsEnrollmentGroupId == dpsEnrollmentGroupId).ToListAsync(); } public async Task<DpsEnrollmentDevice?> GetDpsDeviceAsync(int id) { return await _dpsDbContext.DpsEnrollmentDevices .Include(device => device.DpsEnrollmentGroup) .FirstOrDefaultAsync(d => d.Id == id); } } Download certificates and use The private and the public pem files are used to setup the Azure IoT Hub device and send data from the device to the server. A HTML form is used to download the files. The form sends a post request to the file download API. <form action="/api/FileDownload/DpsDevicePublicKeyPem" method="post"> <input type="hidden" value="@Model.DpsDevice.Id" id="Id" name="Id" /> <button type="submit" style="padding-left0" class="btn btn-link">Download Public PEM</button> </form> The DpsDevicePublicKeyPemAsync method implements the file download. The method gets the data from the database and returns this as pem file. [HttpPost("DpsDevicePublicKeyPem")] public async Task<IActionResult> DpsDevicePublicKeyPemAsync([FromForm] int id) { var cert = await _dpsRegisterDeviceProvider .GetDpsDeviceAsync(id); if (cert == null) throw new ArgumentNullException(nameof(cert)); if (cert.PemPublicKey == null) throw new ArgumentNullException(nameof(cert.PemPublicKey)); return File(Encoding.UTF8.GetBytes(cert.PemPublicKey), "application/octet-stream", $"{cert.Name}-public.pem"); } The device UI displays the data and allows the authenticated user to download the files. The CertificateManager and the Microsoft.Azure.Devices.Client Nuget packages are used to implement the IoT Hub device client. The pem files with the public certificate and the private key can be loaded into a X509Certificate instance. This is then used to send the data using the DeviceAuthenticationWithX509Certificate class. The SendEvent method sends the data using the IoT Hub device Message class. 
var serviceProvider = new ServiceCollection() .AddCertificateManager() .BuildServiceProvider(); var iec = serviceProvider.GetService<ImportExportCertificate>(); #region pem var deviceNamePem = "robot1-feed"; var certPem = File.ReadAllText($"{_pathToCerts}{deviceNamePem}-public.pem"); var eccPem = File.ReadAllText($"{_pathToCerts}{deviceNamePem}-private.pem"); var cert = X509Certificate2.CreateFromPem(certPem, eccPem); // setup deviceCert windows store export var pemDeviceCertPrivate = iec!.PemExportPfxFullCertificate(cert); var certDevice = iec.PemImportCertificate(pemDeviceCertPrivate); #endregion pem var auth = new DeviceAuthenticationWithX509Certificate(deviceNamePem, certDevice); var deviceClient = DeviceClient.Create(iotHubUrl, auth, transportType); if (deviceClient == null) { Console.WriteLine("Failed to create DeviceClient!"); } else { Console.WriteLine("Successfully created DeviceClient!"); SendEvent(deviceClient).Wait(); } Notes Using certificates in .NET and windows is complicated due to how the private keys are handled and loaded. The private keys need to be exported or imported into the stores etc. This is not an easy API to get working and the docs for this are confusing. This type of device transport and the default setup for the device would need to be adapted for your system. In this example, I used ECDsa certificates but you could also use RSA based keys. The root certificate could be replaced with a CA issued one. I created long living certificates because I do not want the devices to stop working in the field. This should be moved to a configuration. A certificate rotation flow would make sense as well. In the follow up articles, I plan to save the events in hot and cold path events and implement device enable, disable flows. I also plan to write about the device twins. The device twins is a excellent way of sharing data in both directions. Links https//github.com/Azure/azure-iot-sdk-csharp https//github.com/damienbod/AspNetCoreCertificates Creating Certificates for X.509 security in Azure IoT Hub using .NET Core https//learn.microsoft.com/en-us/azure/iot-hub/troubleshoot-error-codes https//stackoverflow.com/questions/52750160/what-is-the-rationale-for-all-the-different-x509keystorageflags/52840537#52840537 https//github.com/dotnet/runtime/issues/19581 https//www.nuget.org/packages/CertificateManager Azure IoT Hub Documentation | Microsoft Learn
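The walkthrough above calls a SendEvent method on the device client but does not show its body. The following is a minimal sketch of what such a method could look like using the Microsoft.Azure.Devices.Client Message class; the JSON payload shape, message count, and delay are illustrative assumptions rather than part of the original sample.

using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;

static class DeviceTelemetry
{
    // Sketch: send a few JSON telemetry messages to the IoT Hub the device
    // was provisioned to by DPS. A real device would read sensor values here.
    public static async Task SendEvent(DeviceClient deviceClient)
    {
        for (var i = 0; i < 5; i++)
        {
            // Illustrative payload; adapt the shape to your own telemetry model.
            var payload = $"{{ \"messageId\": {i}, \"temperature\": {20 + i} }}";

            using var message = new Message(Encoding.UTF8.GetBytes(payload))
            {
                ContentType = "application/json",
                ContentEncoding = "utf-8"
            };

            await deviceClient.SendEventAsync(message);
            Console.WriteLine($"Sent message {i}");

            await Task.Delay(TimeSpan.FromSeconds(1));
        }
    }
}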


Solved!! Column 'Employee.ID' is invalid in the se ...
Category: Entity Framework

As a blogger, I would write a blog post about the error message "Column 'Employee.ID' is invalid in ...


Views: 462 Likes: 124
Error 0xc0202009: Data Flow Task 1: SSIS Error Cod ...
Category: Servers

Question I came across this SQL Server ...


Views: 0 Likes: 44
equivalent of bit data type in C#
Category: .Net 7

Question How do you define a bit T-SQL data type in C#? What is the equivalent ...


Views: 8 Likes: 67
How to Connect to PostgreSQL using ODBC 32bit Dsn ...
Category: SQL

Question How do I connect to ...


Views: 0 Likes: 37
Add Link Server to Sql sever
Category: SQL

<a href="https//docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-addli ...


Views: 299 Likes: 102
Understanding Sequences in Transact-SQL (SQL Study ...
Category: SQL

SQL Study Materi ...


Views: 478 Likes: 89
Access SQL Server Management Remotely
Category: SQL

If you want to access SQL Server Management remotely, on Windows, search for Firewall on you ...


Views: 345 Likes: 118
How to Scale SQL Server Google Bard vs ChatGPT Res ...
Category: Servers

Google Bard: There are two ways to scale SQL Server: scaling up and scaling ...


Views: 0 Likes: 28
Linked Server and SSIS
Category: Servers

<a href="https//docs.microsoft.com/en-us/sql/relational-databases/linked-servers/create-linked-serv ...


Views: 347 Likes: 111
MySQL Backup and Restore Commands for DBA
Category: SQL

<span style="font-weight bold; font-size large; textline underli ...


Views: 354 Likes: 102
Speed up JOINS in SQL
Category: SQL

If you want to speed up JOINs, include a WHERE clause after the joins; otherwise, joins take too long to e ...


Views: 301 Likes: 111
Error 0xc02020a1: Data Flow Task 1: Data conversio ...
Category: SQL

Problem Error 0xc02020a1 Data Flow Task 1 Data conversion failed. The data conversion ...


Views: 1523 Likes: 100
How to Change the Time Zone of SQL Server from Doc ...
Category: Docker-Compose

Question How do you change the time from UTC to ...


Views: 0 Likes: 33
Updated First Responder Kit and Consultant Toolkit for June 2023
Updated First Responder Kit and Consultant Toolkit ...

This one's a pretty quiet release: just bug fixes in sp_Blitz, sp_BlitzLock, and sp_DatabaseRestore. Wanna watch me use it? Take the class.

To get the new version:
- Download the updated FirstResponderKit.zip
- Azure Data Studio users with the First Responder Kit extension: ctrl/command+shift+p, First Responder Kit: Import.
- PowerShell users: run Install-DbaFirstResponderKit from dbatools
- Get The Consultant Toolkit to quickly export the First Responder Kit results into an easy-to-share spreadsheet

Consultant Toolkit Changes
I updated it to this month's First Responder Kit, but no changes to querymanifest.json or the spreadsheet. If you've customized those, no changes are necessary this month: just copy your spreadsheet and querymanifest.json into the new release's folder.

sp_Blitz Changes
- Fix: update unsupported SQL Server versions list. Time marches on, SQL Server 2016 SP2. (#3274, thanks Michel Zehnder and sm8680.)
- Fix: if you ran sp_Blitz in databases other than master, we weren't showing the alerts on TDE certificates that haven't been backed up recently. (#3278, thanks ghauan.)

sp_BlitzLock Changes
- Enhancement: compatibility with Azure Managed Instances. (#3279, thanks Erik Darling.)
- Fix: convert existing output tables to larger data types. (#3277, thanks Erik Darling.)
- Fix: don't send output to client when writing it to table. (#3276, thanks Erik Darling.)

sp_DatabaseRestore Changes
- Improvement: new @FixOrphanUsers parameter. When 1, once restore is complete, sets database_principals.principal_id to the value of server_principals.principal_id where database_principals.name = server_principals.name. (#3267, thanks Rebecca Lewis.)
- Fix: better handling of last log files for split backups when using @StopAt. (#3269, thanks Rebecca Lewis.)
- Fix: corrected regression introduced in 8.11 that caused non-striped backups to no longer be deleted. (#3262, thanks Steve the DBA.)

For Support
When you have questions about how the tools work, talk with the community in the #FirstResponderKit Slack channel. Be patient: it's staffed by volunteers with day jobs. If it's your first time in the community Slack, get started here. When you find a bug or want something changed, read the contributing.md file. When you have a question about what the scripts found, first make sure you read the "More Details" URL for any warning you find. We put a lot of work into documentation, and we wouldn't want someone to yell at you to go read the fine manual. After that, when you've still got questions about how something works in SQL Server, post a question at DBA.StackExchange.com and the community (that includes me!) will help. Include exact errors and any applicable screenshots, your SQL Server version number (including the build #), and the version of the tool you're working with.


[Solved] How to copy data from SQL Management Data ...
Category: SQL

So, I struggled to copy data from SQL Management dataset to ...


Views: 273 Likes: 89
DDL Triggers are AFTER triggers
DDL Triggers are AFTER triggers

In this post I would like to look into DDL triggers and explain their functionality. Let me start with a short introduction on triggers. All SQL Server developers use SQL triggers, which are basically a mechanism that is invoked when a particular action occurs on a particular table. Triggers consist of:
- A name
- An action
- The execution
The maximum size of a trigger name is 128 characters. The action of a trigger can be either a DML statement (INSERT, UPDATE, or DELETE) or a DDL statement (CREATE, ALTER, DROP). Therefore, there are two trigger forms: DML triggers and DDL triggers. The AFTER and INSTEAD OF options are two additional options that you can define for a trigger. AFTER triggers fire after the triggering action occurs. INSTEAD OF triggers are executed instead of the corresponding triggering action. AFTER triggers can be created only on tables, while INSTEAD OF triggers can be created on both tables and views.
DDL triggers were introduced in SQL Server 2005. DDL triggers are not INSTEAD OF triggers. They are implemented as AFTER triggers, which means the operation occurs and is then caught in the trigger. The operation can then be optionally rolled back, if you put a ROLLBACK statement in the trigger body. This means they're not quite as lightweight as you might think. Imagine doing the following:

ALTER TABLE MyTable ADD newcolumn VARCHAR (30) DEFAULT 'default value';

If we have a DDL trigger defined for ALTER_TABLE events, or DDL_TABLE_EVENTS, the trigger will fire due to the above T-SQL batch, every row in the table will be expanded to include the new column (as it has a non-null default), and the operation is then rolled back by your trigger body. Type (copy paste) the following T-SQL statements in a new query window in SSMS:

CREATE DATABASE sampleDB;
GO

USE sampleDB;
GO

CREATE TABLE Product
(
    Pid INT PRIMARY KEY IDENTITY,
    pname NVARCHAR(50),
    price DECIMAL(18, 4)
);

CREATE TRIGGER Not_Alter_Tables
ON DATABASE
FOR ALTER_TABLE
AS
    PRINT 'ALTER statements are not permitted on the tables of this DB'
    SELECT EVENTDATA().value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)')
    RAISERROR ('Tables cannot be altered in this database.', 16, 1)
    ROLLBACK;
GO

ALTER TABLE dbo.Product ADD newcolumn VARCHAR (30) DEFAULT 'default value';

Let me explain what I do in the code above. I create a sample database. I create a sample empty table. I create a trigger "Not_Alter_Tables" on this database. This trigger applies to all the tables in the database. This trigger is a DDL trigger, hence an AFTER trigger. In the trigger I capture the T-SQL statement that invoked the trigger, print a statement, raise an error, and then roll back the operation. Then I attempt to add a new column to the table. In this example, when the trigger is invoked, the table will be expanded to include the new column (as it has a non-null default), and then the operation is rolled back in the trigger body. Have a look at the picture below. In a nutshell, DDL triggers are AFTER triggers and can be quite expensive. In the example above the best way is not to use DDL triggers but to instead use explicit permissions, e.g. REVOKE/DENY, to prevent users from altering the schema of the table. Hope it helps!!!


SqlException: A connection was successfully establ ...
Category: SQL

Question How do you resolve SqlException A connection was successfully establ ...


Views: 341 Likes: 93
15 Recruiters Reveal If Data Science Certificates Are Worth It
15 Recruiters Reveal If Data Science Certificates ...

"Are data data science certificates worth it?" To answer this question properly, we interviewed 15 hiring managers in the data science field. This article will explain what certifications really mean to hiring managers and compare the best data science certifications available right now.  Bonus content We’ll also reveal the best-kept secrets among recruiters, including what they pay most attention to when weeding out resumes. How Data Science Certificates Impact Your Job Search We talked to more than a dozen hiring managers and recruiters in the data science field about what they wanted to see on applicants’ résumés.  None of them mentioned certificates. Not one. Here’s what we learned certificates certainly won’t hurt your job search as long as they’re presented correctly. But they’re unlikely to help much either, at least on their own. Why Data Science Certificates Fall Short You might be wondering why these certificates aren’t worth the paper they’re printed on.  The issue is that there’s no universal standard and no universally accredited certification authority. Different websites, schools, and online learning platforms all issue their own certificates. That means these documents could mean anything–or they could mean nothing at all!  This is why employers tend not to give them more than a passing glance when qualifying candidates. What’s the Point of a Certification Then? If certifications won’t help you get a job in data science, then what’s the point of earning one?  When it comes down to it, data scientist certifications aren’t completely useless. At Dataquest, we issue certificates when users complete any of our interactive data science courses. Why? Because it’s a great way for students to demonstrate that they’re actively engaged in learning new skills.  Recruiters do like to see that applicants are constantly trying to improve themselves. Listing data science certificates can help your job application in that way. What’s Better than a Data Science Certificate? What’s most important to recruiters is whether you can actually do the job. And certificates aren’t proof of real skills.  The best way to demonstrate your skills is by completing projects and adding them to a portfolio. Portfolios are like the holy grail of data science skills. That’s why hiring managers look at them first. Depending on what they see in your portfolio, they’ll either discard your application or send it to the next round of the hiring process.  Most of Dataquest’s courses contain guided projects you’ll complete to help you build your portfolio. Here are just a few of them Prison Break — Have some fun using Python and Jupyter Notebook to analyze a dataset of helicopter prison escapes. Exploring Hacker News Posts — Work with a dataset of submissions to Hacker News, a popular technology site. Exploring eBay Car Sales Data — Use Python to work with a scraped dataset of used cars from eBay Kleinanzeigen, a classifieds section of the German eBay website. You can sign up for free! Check out our courses here. When considering which certification to get, don’t focus on “which data science certificate is best.” Instead, find the platform that best helps you learn the fundamental data science skills. That’s what’s going to help you land a job in the field. How to Choose a Data Science Certificate Program in 5 Steps Finding a data science program that offers a certificate is easy. A quick Google search will turn up dozens. The hard part is deciding whether the certificate is worth your time and money. 
Let’s simplify this process. Here are five key things to consider when looking at a data science certification Content Cost Prerequisites or qualifications Time commitment Student reviews Remember, data science certificates are not worth the paper they’re printed on unless they teach you the skills employers are looking for. So that first bullet point is the most important. Think content, content, content!  Now, let’s look at some real-life examples to compare. Top Data Science Certifications 1. Dataquest What you’ll learn Dataquest offers five different career paths that cover the required skills to become a data analyst, business analyst, data scientist, and/or data engineer. The specific skills covered vary depending on which path you choose.  Topics include Python and R programming SQL and PostgreSQL Probability and statistics Machine learning Workflow skills like Git, the command line (bash/shell) And more Cost An annual Premium subscription of $399. Monthly subscriptions are also available. Prerequisites None. There is no application process (anyone can sign up and start learning today). No prior knowledge of applied statistics or programming is required. Time commitment Varies. Dataquest is a self-serve, interactive learning platform. Most learners find they’re able to meet their learning goals in six months, if studying fewer than ten hours per week. Learning goals can be accelerated with larger time commitments.  Reviews 4.85/5 average on Switchup (301 reviews) 4.76/5 on Course Report (19 reviews) 4.7/5 on G2 (46 reviews) 2. Cloudera University Data Analyst Course/Exam What you’ll learn This course focuses on data analysis using Apache products Hadoop, Hive, and Impala. It covers some SQL, but does not address Python or R programming. Cost The on-demand version costs $2,235 (180 days of access). Certification exams have an additional cost. Prerequisites Some prior knowledge of SQL and Linux command line is required. Time commitment Varies. Because this is a self-paced course, users have access for 180 days to complete 15 sections. Each section is estimated to take between 5-9 hours. The time commitment is between 75 and 135 hours. If you commit less than an hour each day, it might take you the entire 180 days. If you can devote 9 or more hours per day, it might take you a couple of weeks to complete. Reviews Third-party reviews for this program are difficult to find. 3. IBM Data Science Professional Certificate What you’ll learn This Coursera-based program covers Python and SQL. This includes some machine learning skills with Python. Cost A Coursera subscription, which is required. Based on Coursera’s 10-month completion estimate, the approximate total program cost is $390. A similar program is also available on EdX. Prerequisites None.  Time commitment Varies. Coursera suggests that the average time to complete this certificate is ten months. Reviews Quantitative third-party reviews are difficult to find. 4.6/5 average on Coursera’s own site (57,501 ratings) 4. Harvard/EdX Professional Certificate in Data Science What you’ll learn This EdX-based program covers R, some machine learning skills, and some statistics and workflow skills. It does not appear to include SQL. Cost $792.80 Prerequisites None.  Time commitment One year and five months. Course progress doesn’t carry over from session to session, so it could require more time if you’re not able to complete a course within its course run. Reviews Quantitative third-party reviews are difficult to find. 
4.6/5 average on Class Central (11 reviews) 5. Certified Analytics Professional What you’ll learn Potentially nothing–this is simply a certification exam. However, test prep courses are available. Cost The certification test costs $695 and includes limited prep materials. Dedicated prep courses are available for an additional cost. Prerequisites An application is required to take the certification exam. Since no course is included, you’ll need to learn the required information on your own or sign up for a course separately. Time commitment The exam itself is relatively short. The dedicated prep courses take 1-2 months, depending on options. They are not required for taking the exam. Reviews Quantitative third-party reviews are difficult to find. Here are some independent opinions about the certification Reddit thread about CAP Quora thread about CAP 6. From Data to Insights with Google Cloud What you’ll learn This course covers SQL data analysis skills with a focus on using BigQuery and Google Cloud’s data analysis tool. Cost A Coursera subscription, which is required, costs $39/month. Coursera estimates that most students will need two months to complete the program. Prerequisites The course page says “We recommend participants have some proficiency with ANSI SQL.” It’s not clear what level of SQL proficiency is required. Time commitment Coursera estimates that most students will need two months to complete the program, but students can work at their own pace. However, courses do begin on prescribed dates. Reviews Quantitative third-party reviews are difficult to find, but 4.7/5 rating on Coursera itself (3,712 ratings) Insider Tip Beware of Prerequisites and Qualifications! Before you start looking for data science courses and certifications, there’s something you need to be aware of.  While some programs like Dataquest, Coursera, and Udemy do not require any particular background or industry knowledge, many others do have concrete prerequisites. For example, DASCA’s Senior Data Scientist Certification tracks require at least a Bachelor’s degree (some tracks require a Master’s degree). That’s in addition to a minimum of 3-5 years of professional data-related experience! Some programs, particularly offline bootcamps, also require specific qualifications or have extensive application processes. Translation? You won’t be able to jump in right away and begin learning. You’ll need to factor in time costs and application fees for these programs when making your choice. Best-Kept Secret The Myth of University Certificates in Data Science If you’re considering a data science certificate from a university, think again.  Many of the expensive certification programs offered online by brand-name schools (and even Ivy-League schools) are not very meaningful to potential employers.  A number of these programs are not even administered by the schools themselves. Instead, they’re run by for-profit, third-party firms called “Online Program Managers”.  What’s worse is that data science recruiters know this. Yes, employers are keenly aware that a Harvard-affiliated certificate from EdX and a Harvard University degree are two very different things. Plus, most data science hiring managers will not have time to research every data science certification they see on a résumé. Most résumés are only given about 30 seconds of review time. So even if your university-based certificate is actually worth something, recruiters likely won’t notice it.  
The Sticker Shock of University Certificates University certificates tend to be expensive. Consider the cost of some of the most popular options out there Cornell’s three-week data analytics certificate – $3,600 Duke’s big data and data science certificate – $3,195 Georgetown’s professional certificate in data science – $7,496 UC Berkeley’s data scientist certification program – $5,100 Harvard’s data science certificate – $11,600 How to Get the Data Science Skills Employers Desire We’ve established that recruiters and hiring managers in data science are looking for real-world skills, not necessarily certifications. So what’s the best way to get the skills you need? Hands-down, the best way to acquire compelling data science skills is by digging in and getting your hands dirty with actual data.  Choose a data science course that lets you complete projects as you learn. Then, showcase your know-how with digital portfolios. That way, employers can see what skills you’ve mastered when considering your application.  At Dataquest, our courses are interactive and project-based. They’re designed so that students can immediately apply their learning and document their new skills to get the attention of recruiters. Sign up for free today, and launch your career in the growing field of data science!


SQL Developer
Category: Jobs

Would you be interested in the following long-term opportunity? If not int ...


Views: 0 Likes: 73
SQL Developer
Category: Jobs

Would you be interested in the following long-term opportunity? If not int ...


Views: 0 Likes: 64
Find Out Which Database Contain A Table
Category: Databases

Sometimes it is very important to query the database meta-data in order to be fast and efficient. SQ ...


Views: 318 Likes: 85
15 Recruiters Reveal If SQL Certifications Are Worth It
15 Recruiters Reveal If SQL Certifications Are Wor ...

Will getting a SQL certification actually help you get a data job?  There's a lot of conflicting answers out there, but we're here to clear the air. In this article, we’ll dispel some of the myths regarding SQL certifications, shed light on how hiring managers view these certificates, and back up our claims with actual data. Do you need a SQL certification for a job? It depends. Learning SQL is crucial if you want to get a job in data. But do you need an actual certificate that proves this knowledge? It depends on your desired role in data science.  When You DON'T Need a Certificate Are you planning to work as a data analyst, data engineer, statistician, or data scientist?  Then, the answer is No, you do not need a SQL certificate. You most certainly need SQL skills for these jobs, but a certification won’t be required. In fact, it probably won’t even help. Here’s why. . . What Hiring Managers Have to Say About SQL Certificates I recently interviewed data science hiring managers, recruiters, and other professionals for a data science career guide. I asked them about the skills and qualifications they wanted to see in good job candidates for data science roles. Throughout my 200 pages of interview transcripts, the term “SQL” is mentioned a lot. It’s clearly a skill that most hiring managers want to see. But the terms “certification” and “certificate”? Those words don’t appear in the transcripts at all.  Not a single person I spoke to thought certificates were important enough to even mention. In other words, the people who hire data analysts and data scientists typically don’t care about certifications. Having a SQL certificate on your resume isn’t likely to impact their decision one way or the other. Why Aren’t Data Science Recruiters Interested in Certificates? Certificates in the industry are widely available and heavily promoted. But most data science employers aren’t impressed with them. Why not?  The short answer is that there’s no “standard” certification for SQL. Plus, there are so many different online and offline SQL certification options that employers struggle to determine whether these credentials actually mean anything. Rather than relying on a single piece of paper that may or may not equate to actual skills, it’s easier for employers to simply look at an applicant’s project portfolio. Tangible proof of real-world experience in the industry is a more reliable representation of SQL skills compared to a certification.  That’s why Dataquest has students complete comprehensive projects after each interactive SQL course. This creates a skills showcase you can present to employers during the job hunt. You can start for free, and you’ll be writing real code within minutes of signing up. The Exception For most roles in data science, a SQL certificate isn’t necessary. But there are exceptions to this rule.  For example, if you want to work in database administration as opposed to data science, a certificate might be required. Likewise, if you’re looking at a very specific company or industry, getting SQL certified could be helpful.   There are many “flavors” of SQL tied to different database systems and tools. So, there may be official certifications associated with the specific type of SQL a company uses that are valuable, or even mandatory. For example, if you’re applying for a database job at a company that uses Microsoft’s SQL Server, earning one of Microsoft’s Azure Database Administrator certificates could be helpful. 
If you’re applying for a job at a company that uses Oracle, getting an Oracle Database SQL certification may be required. Most Data Science Jobs Don’t Require Certification Let’s be clear, though. For the vast majority of data science roles, specific certifications are not usually required. The different variations of SQL rarely differ too much from “base” SQL. Thus, most employers won’t be concerned about whether you’ve mastered a particular brand’s proprietary tweaks. As a general rule, recruiters just want to see proof that you’ve got the fundamental SQL skills to access and filter the data you need. Certifications don’t really prove that you have a particular skill, so the best way to demonstrate your SQL knowledge on a job application is to include projects that show off your SQL mastery. Is a SQL Certification Worth it for Data Science? It depends. Ask yourself Is the certification program teaching you valuable skills or just giving you a bullet point for your LinkedIn? The former can be worth it. The latter? Not so much.  The price of the certification is also an important consideration. Not many people have thousands to spend on a SQL certification. Even if you do, though, there’s no good reason to pay that much. You can learn SQL interactively and get certified for a much lower price on platforms like Dataquest. What SQL Certificate Is Best? As mentioned above, there’s a good chance you don’t need a SQL certificate. But if you do feel you need one, or you’d just like to have one, here are some of the best SQL certifications available Dataquest’s SQL Courses These are great options for learning SQL for data science and data analysis. They’ll take you hands-on with real SQL databases and show you how to write queries to pull, filter, and analyze the data you need. All of our SQL courses offer certifications that you can add to your LinkedIn after you’ve completed them. They also include guided projects that you can complete and add to your GitHub and resume! MTA Database Fundamentals This is a Microsoft certification that covers some of the fundamentals of SQL for database administration. It is focused on Microsoft’s SQL Server product, but many of the skills it covers will be relevant to other SQL-based relational database systems. Microsoft’s Azure Database Administrator Certificate This is a great option if you’re applying to database administrator jobs at companies that use Microsoft SQL Server. The Azure certification is the newest and most relevant certification related to Microsoft SQL Server. Oracle Database SQL Certification This could be a good certification for anyone who’s interested in database jobs at companies that use Oracle. Koenig SQL Certifications Koenig offers a variety of SQL-related certification programs, although they tend to be quite pricey (over US $1,000 for most programs). Most of these certifications are specific to particular database technologies (think Microsoft SQL Server) rather than being aimed at building general SQL knowledge. Thus, they’re best for those who know they’ll need training in a specific type of database for a job as a database administrator. Are university, edX, or Coursera certifications in SQL too good to be true? Unfortunately, yes.  Interested in a more general SQL certifications? You could get certified through a university-affiliated program. These certification programs are available either online or in-person. For example, there’s a Stanford program at EdX. 
And programs affiliated with UC Davis and the University of Michigan can be found at Coursera. These programs appear to offer some of the prestige of a university degree without the expense or the time commitment. Unfortunately, hiring managers don’t usually see them that way. This is Stanford University. Unfortunately, getting a Stanford certificate from EdX will not trick employers into thinking you went here. Why Employers Aren’t Impressed with SQL Certificates from Universities Employers know that a Stanford certificate and a Stanford degree are very different things. Even if these certificate programs use video lectures from real courses, they rarely include rigorous testing or project-based assessments.  The Flawed University Formula for Teaching SQL Most online university certificate programs follow a basic formula Watch video lectures to learn the material. Take multiple-choice or fill-in-the-blank quizzes to test your knowledge. If you complete any kind of hands-on project, it is ungraded, or graded by other learners in your cohort. This format is immensely popular because it is the best way for universities to monetize their course material. All they have to do is record some lectures, write a few quizzes, and then hundreds of thousands of students can move through the courses with no additional effort or expense required.  It’s easy and profitable. That doesn’t mean it’s necessarily effective, though, and employers know it.  With many of these certification providers, it’s possible to complete an online programming certification without ever having written or run a line of code! So you can see why a certification like this doesn’t hold much weight with recruiters. How Can I Learn the SQL Skills Employers Want? Getting hands-on experience with writing and running SQL queries is imperative, though. So is working with real data. The best way to learn these critical professional tasks is by doing them, not by watching a professor talk about them. That’s why at Dataquest, we have an interactive online platform that lets you write and run real SQL queries on real data right from your browser window. As you’re learning new SQL concepts, you’ll be immediately applying them in a real-world setting. This is hands-down the best way to learn SQL. After each course, you’ll be asked to synthesize your new learning into a longer-form guided project. This is something that you can customize and put on your resume and GitHub once you’re finished. We’ll give you a certificate, too, but that probably won’t be the most valuable takeaway. Of course, the best way to determine if something is worth it is always to try it for yourself. At Dataquest, you can sign up for a free account and dive right into learning SQL. This is how we teach SQL at Dataquest


Get Rid Of Black Blinking Cursor MSSMS
Category: Databases

Question How do you remove a Bl ...


Views: 2699 Likes: 111
How to Use 2 CTE in a Single SQL Query - Use Temp ...
Category: SQL

Question I found myself needing to use two CTEs in a single SQL Query, however ...


Views: 0 Likes: 38
A Data CEO’s Guide to Becoming a Data Scientist From Scratch
A Data CEO’s Guide to Becoming a Data Scientist Fr ...

If you want to know how to become a data scientist, then you’re in the right place. I’ve been where you are, and now I want to help. A decade ago, I was just a college graduate with a history degree. I then became a machine learning engineer, data science consultant, and now CEO of Dataquest. If I could do everything over, I would follow the steps I’m going to share with you in this article. It would have fast-tracked my career, saved me thousands of hours, and prevented a few gray hairs. The Wrong and Right Way  When I was learning, I tried to follow various online data science guides, but I ended up bored and without any actual data science skills to show for my time.  The guides were like a teacher at school handing me a bunch of books and telling me to read them all — a learning approach that never appealed to me. It was frustrating and self-defeating. Over time, I realized that I learn most effectively when I'm working on a problem I'm interested in.  And then it clicked. Instead of learning a checklist of data science skills, I decided to focus on building projects around real data. Not only did this learning method motivate me, it also mirrored the work I’d do in an actual data scientist role. I created this guide to help aspiring data scientists who are in the same position I was in. In fact, that’s also why I created Dataquest. Our data science courses are designed to take you from beginner to job-ready in less than 8 months using actual code and real-world projects. However, a series of courses isn’t enough. You need to know how to think, study, plan, and execute effectively if you want to become a data scientist. This actionable guide contains everything you need to know. How to Become a Data Scientist Step 1 Question Everything Step 2 Learn The Basics Step 3 Build Projects Step 4 Share Your Work Step 5 Learn From Others Step 6 Push Your Boundaries Now, let’s go over each of these one by one. Step 1 Question Everything The data science and data analytics field is appealing because you get to answer interesting questions using actual data and code. These questions can range from Can I predict whether a flight will be on time? to How much does the U.S. spend per student on education?  To answer these questions, you need to develop an analytical mindset. The best way to develop this mindset is to start with analyzing news articles. First, find a news article that discusses data. Here are two great examples Can Running Make You Smarter? or Is Sugar Really Bad for You?.  Then, think about the following How they reach their conclusions given the data they discuss How you might design a study to investigate further What questions you might want to ask if you had access to the underlying data Some articles, like this one on gun deaths in the U.S. and this one on online communities supporting Donald Trump actually have the underlying data available for download. This allows you to explore even deeper. You could do the following Download the data, and open it in Excel or an equivalent tool See what patterns you can find in the data by eyeballing it Do you think the data supports the conclusions of the article? Why or why not? What additional questions do you think you can use the data to answer? Here are some good places to find data-driven articles FiveThirtyEight New York Times Vox The Intercept Reflect After a few weeks of reading articles, reflect on whether you enjoyed coming up with questions and answering them. 
Becoming a data scientist is a long road, and you need to be very passionate about the field to make it all the way.  Data scientists constantly come up with questions and answer them using mathematical models and data analysis tools, so this step is great for understanding whether you'll actually like the work. If You Lack Interest, Analyze Things You Enjoy Perhaps you don't enjoy the process of coming up with questions in the abstract, but maybe you enjoy analyzing health or finance data. Find what you're passionate about, and then start viewing that passion with an analytical mindset. Personally, I was very interested in stock market data, which motivated me to build a model to predict the market. If you want to put in the months of hard work necessary to learn data science, working on something you’re passionate about will help you stay motivated when you face setbacks. Step 2 Learn The Basics Once you've figured out how to ask the right questions, you're ready to start learning the technical skills necessary to answer them. I recommend learning data science by studying the basics of programming in Python. Python is a programming language that has consistent syntax and is often recommended for beginners. It’s also versatile enough for extremely complex data science and machine learning-related work, such as deep learning or artificial intelligence using big data. Many people worry about which programming language to choose, but here are the key points to remember Data science is about answering questions and driving business value, not about tools Learning the concepts is more important than learning the syntax Building projects and sharing them is what you'll do in an actual data science role, and learning this way will give you a head start Super important note The goal isn’t to learn everything; it’s to learn just enough to start building projects.  Where You Should Learn Here are a few great places to learn Dataquest — I started Dataquest to make learning Python for data science or data analysis easier, faster, and more fun. We offer basic Python fundamentals courses, all the way to an all-in-one path consisting of all courses you need to become a data scientist.  Learn Python the Hard Way — a book that teaches Python concepts from the basics to more in-depth programs. The Python Tutorial — a free tutorial provided by the main Python site. The key is to learn the basics and start answering some of the questions you came up with over the past few weeks browsing articles. Step 3 Build Projects As you're learning the basics of coding, you should start building projects that answer interesting questions that will showcase your data science skills.  The projects you build don't have to be complex. For example, you could analyze Super Bowl winners to find patterns.  The key is to find interesting datasets, ask questions about the data, then answer those questions with code. If you need help finding datasets, check out this post for a good list of places to find them. As you're building projects, remember that Most data science work is data cleaning. The most common machine learning technique is linear regression. Everyone starts somewhere. Even if you feel like what you're doing isn't impressive, it's still worth working on. Where to Find Project Ideas Not only does building projects help you practice your skills and understand real data science work, it also helps you build a portfolio to show potential employers.  
Here are some more detailed guides on building projects on your own Storytelling with data Machine learning project Additionally, most of Dataquest’s courses contain interactive projects that you can complete while you’re learning. Here are just a few examples Prison Break — Have some fun, and analyze a dataset of helicopter prison escapes using Python and Jupyter Notebook. Exploring Hacker News Posts — Work with a dataset of submissions to Hacker News, a popular technology site. Exploring eBay Car Sales Data — Use Python to work with a scraped dataset of used cars from eBay Kleinanzeigen, a classifieds section of the German eBay website. Star Wars Survey — Work with Jupyter Notebook to analyze data on the Star Wars movies. Analyzing NYC High School Data — Discover the SAT performance of different demographics using scatter plots and maps. Predicting the Weather Using Machine Learning — Learn how to prepare data for machine learning, work with time series data, measure error, and improve your model performance. Add Project Complexity After building a few small projects, it's time to kick it up a notch! We need to add layers of project complexity to learn more advanced topics. At this step, however, it's crucial to execute this in an area you're interested in. My interest was the stock market, so all my advanced projects had to do with predictive modeling. As your skills grow, you can make the problem more complex by adding nuances like minute-by-minute prices and more accurate predictions. Check out this article on Python projects for more inspiration. Step 4 Share Your Work Once you've built a few data science projects, share them with others on GitHub! Here’s why It makes you think about how to best present your projects, which is what you'd do in a data science role. They allow your peers to view your projects and provide feedback. They allow employers to view your projects. Helpful resources about project portfolios How To Present Your Data Science Portfolio on GitHub Data Science Portfolios That Will Get You the Job Start a Simple Blog Along with uploading your work to GitHub, you should also think about publishing a blog. When I was learning data science, writing blog posts helped me do the following Capture interest from recruiters Learn concepts more thoroughly (the process of teaching really helps you learn) Connect with peers Here are some good topics for blog posts Explaining data science and programming concepts Discussing your projects and walking through your findings Discussing how you’re learning data science Here’s an example of a visualization I made on my blog many years ago that shows how much each Simpsons character likes the others Step 5 Learn From Others After you've started to build an online presence, it's a good idea to start engaging with other data scientists. You can do this in-person or in online communities. Here are some good online communities /r/datascience Data Science Slack Quora Kaggle Here at Dataquest, we have an online community that learners can use to receive feedback on projects, discuss tough data-related problems, and build relationships with data professionals. Personally, I was very active on Quora and Kaggle when I was learning, which helped me immensely. Engaging in online communities is a good way to do the following Find other people to learn with Enhance your profile and find opportunities Strengthen your knowledge by learning from others You can also engage with people in-person through Meetups. 
In-person engagement can help you meet and learn from more experienced data scientists in your area. Step 6 Push Your Boundaries What kind of data scientists to companies want to hire? The ones that find critical insights that save them money or make their customers happier. You have to apply the same process to learning — keep searching for new questions to answer, and keep answering harder and more complex questions.  If you look back on your projects from a month or two ago, and you don’t see room for improvement, you probably aren't pushing your boundaries enough. You should be making strong progress every month, and your work should reflect that. Here are some ways to push your boundaries and learn data science faster Try working with a larger dataset  Start a data science project that requires knowledge you don't have Try making your project run faster Teach what you did in a project to someone else You’ve Got This! Studying to become a data scientist or data engineer isn't easy, but the key is to stay motivated and enjoy what you're doing. If you're consistently building projects and sharing them, you'll build your expertise and get the data scientist job that you want. I haven't given you an exact roadmap to learning data science, but if you follow this process, you'll get farther than you imagined you could. Anyone can become a data scientist if you're motivated enough. After years of being frustrated with how conventional sites taught data science, I created Dataquest, a better way to learn data science online. Dataquest solves the problems of MOOCs, where you never know what course to take next, and you're never motivated by what you're learning. Dataquest leverages the lessons I've learned from helping thousands of people learn data science, and it focuses on making the learning experience engaging. At Dataquest, you'll build dozens of projects, and you’ll learn all the skills you need to be a successful data scientist. Dataquest students have been hired at companies like Accenture and SpaceX . Good luck becoming a data scientist! Becoming a Data Scientist — FAQs What are the data scientist qualifications? Data scientists need to have a strong command of the relevant technical skills, which will include programming in Python or R, writing queries in SQL, building and optimizing machine learning models, and often some "workflow" skills like Git and the command line. Data scientists also need strong problem-solving, data visualization, and communication skills. Whereas a data analyst will often be given a question to answer, a data scientist is expected to explore the data and find relevant questions and business opportunities that others may have missed. While it is possible to find work as a data scientist with no prior experience, it's not a common path. Normally, people will work as a data analyst or data engineer before transitioning into a data scientist role. What are the education requirements for a data scientist? Most data scientist roles will require at least a Bachelor's degree. Degrees in technical fields like computer science and statistics may be preferred, as well as advanced degrees like Ph.D.s and Master’s degrees. However, advanced degrees are generally not strictly required (even when it says they are in the job posting). What employers are concerned about most is your skill-set. 
Applicants with less advanced or less technically relevant degrees can offset this disadvantage with a great project portfolio that demonstrates their advanced skills and experience doing relevant data science work. What skills are needed to become a data scientist? Specific requirements can vary quite a bit from job to job, and as the industry matures, more specialized roles will emerge. In general, though, the following skills are necessary for virtually any data science role Programming in Python or R SQL Probability and statistics Building and optimizing machine learning models Data visualization Communication Big data Data mining Data analysis Every data scientist will need to know the basics, but one role might require some more in-depth experience with Natural Language Processing (NLP), whereas another might need you to build production-ready predictive algorithms. Is it hard to become a data scientist? Yes — you should expect to face challenges on your journey to becoming a data scientist. This role requires fairly advanced programming skills and statistical knowledge, in addition to strong communication skills. Anyone can learn these skills, but you'll need motivation to push yourself through the tough moments. Choosing the right platform and approach to learning can also help make the process easier. How long does it take to become a data scientist? The length of time it takes to become a data scientist varies from person to person. At Dataquest, most of our students report reaching their learning goals in one year or less. How long the learning process takes you will depend on how much time you're able to dedicate to it. Similarly, the job search process can vary in length depending on the projects you've built, your other qualifications, your professional background, and more. Is data science a good career choice? Yes — a data science career is a fantastic choice. Demand for data scientists is high, and the world is generating a massive (and increasing) amount of data every day.  We don't claim to have a crystal ball or know what the future holds, but data science is a fast-growing field with high demand and lucrative salaries. What is the data scientist career path? The typical data scientist career path usually begins with other data careers, such as data analysts or data engineers. Then it moves into other data science roles via internal promotion or job changes. From there, more experienced data scientists can look for senior data scientist roles. Experienced data scientists with management skills can move into director of data science and similar director and executive-level roles. What salaries do data scientists make? Salaries vary widely based on location and the experience level of the applicant. On average, however, data scientists make very comfortable salaries. In 2022, the average data scientist salary is more than $120,000 USD per year in the US. And other data science roles also command high salaries Data analyst $96,707 Data engineer $131,444 Data architect $135,096 Business analyst $97,224 Which certification is best for data science? Many assume that a data science certification or completion of a data science bootcamp is something that hiring managers are looking for in qualified candidates, but this isn’t true. Hiring managers are looking for a demonstration of the skills required for the job. And unfortunately, a data analytics or data science certificate isn’t the best showcase of your skills.  The reason for this is simple.  
There are dozens of bootcamps and data science certification programs out there. Many places offer them — from startups to universities to learning platforms. Because there are so many, employers have no way of knowing which ones are the most rigorous.  While an employer may view a certificate as an example of an eagerness to continue learning, they won’t see it as a demonstration of skills or abilities. The best way to showcase your skills properly is with projects and a robust portfolio.


SQL 0x80004005  Description: "Cannot continue the ...
Category: SQL

Question How do you solve for t ...


Views: 0 Likes: 42
Linked Server and SSIS
Category: Servers

<a href="https//docs.microsoft.com/en-us/sql/relational-databases/linked-servers/create-linked-serv ...


Views: 357 Likes: 117
Data Scientist at Qventus | Y Combinator
Data Scientist at Qventus | Y Combinator

The Company Have you ever found yourself or a loved one waiting hours and hours in a hospital Emergency Room to get care? Or have you ever had a surgery scheduled for months in the future that needed to happen sooner? Unfortunately, our healthcare system is full of these types of operational problems. Our work saves lives and helps hospitals cut tens of millions of dollars in operational costs, while improving the quality of care they’re able to deliver. Qventus is a real-time decision making platform for hospital operations. Our mission is to simplify how healthcare operates, so that hospitals and caregivers can focus on delivering the best possible care to patients. We use artificial intelligence and machine learning to create products that help nurses, doctors, and hospital staff anticipate issues and make operational decisions proactively. Qventus works with leading public, academic and community hospitals across the United States. The company was recognized by the 2019 Black Book Awards in healthcare for patient flow and by CB Insights as a 2019 top 100 Most Promising Company in Artificial Intelligence. Recently, Qventus won the Robert Wood Johnson Foundation Emergency Response for the Healthcare System Innovation Challenge through its work helping health systems across the country plan for and operate in the COVID pandemic. The Role Qventus is looking for Data Scientists to help build the next generation of Qventus’s AI. You’ll join a cross-functional team of clinicians, data scientists, data platformers, and product experts to help care teams across the country make day to day decisions to get patients the right care faster and with less overhead. As a data scientist at Qventus, you will have the opportunity to explore Qventus’s unique and rich Healthcare dataset to develop and deploy cutting edge ML based solutions. You will evaluate potential modeling approaches, build features together with data platform partners, implement algorithms, and drive meaningful improvements to our underlying ML infrastructure to scale Qventus into the next generation. You should be strongly motivated to have an impact in the company and dedicated to helping improve the quality of healthcare operations. Location This role is remote first, but we have offices in Mountain View, CA and open to local hires as well. Key Responsibilities - Drive the development of our machine learning platform to efficiently train, evaluate, and deploy high quality models. - Develop, deploy, and tune performant and highly scalable machine learning models in the healthcare space strategically employing a wide array of modeling and statistical techniques. - Collaborate with Product partners to experiment, design and measure quality of model based interventions including development of analytics dashboards. - Develop tools and resources to improve transparency into Data Science technical architecture and increase collaboration with engineering and analytics partners. Key Qualifications - Proven ability to develop and tailor algorithmic solutions to business problems in collaboration with product or delivery partners. - 3+ years industry experience developing, launching, and iterating on machine learning models and/or developing the core data science platform. - High competency in Python, with experience developing scalable systems and using statistical packages such as Pandas, Scikit-learn, and XGBoost. 
- Strong software development foundations - dedication to high code quality, stable architecture and an eye towards maintainability. - Excellent SQL - hands-on experience manipulating data sets, data cleaning, and pipelines. - Interest and ability in learning and working in a fast-paced, dynamic environment across multiple technologies. Nice to haves - Strong cross-functional communication - ability to break down complex technical components for technical and non-technical partners alike - Demonstrated understanding of a wide variety of statistical and machine learning methods (resampling, regression, classification, ensemble methods, transfer learning, etc). Practical hands-on experience with - Natural Language Processing or Understanding techniques and productionalization - Deep Learning modeling techniques and infrastructure - Ecosystem of data platform technologies such as AWS Services (RDS, ECS/EKS, Lambda, S3), DBT, Snowflake, Databricks - Data analytics tools (Looker preferred) - Experience in healthcare working with real world data, particularly in the inpatient setting. - Bachelor's degree in Computer Science, Engineering, or related field, or equivalent training / experience. Qventus is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws, regulations and ordinances. Candidate information will be treated in accordance with our candidate privacy notice, which can be found here: https://qventus.com/ccpa-privacy-notice/


Anomaly Detection Using Sigma Rules: Build Your Own Spark Streaming Detections
Anomaly Detection Using Sigma Rules Build Your Ow ...

Easily deploy Sigma rules in Spark streaming pipelines: a future-proof solution supporting the upcoming Sigma 2 specification.

Photo by Dana Walker on Unsplash

In our previous articles we elaborated and designed a stateful function named flux-capacitor. The flux-capacitor is a stateful function that can remember parent-child (and ancestor) relationships between log events. It can also remember events occurring on the same host in a certain window of time; the Sigma specification refers to this as temporal proximity correlation. For a deep-dive into the design of flux-capacitor refer to part 1, part 2, part 3, part 4, and part 5. However, you don't need to understand the implementation of the function to use it.

In this article we first show a Spark streaming job which performs discrete detections. A discrete detection is a Sigma rule which uses the features and values of a single log line (a single event). Then we leverage the flux-capacitor function to handle stateful parent-child relationships between log events. The flux-capacitor is also able to detect a number of events occurring on the same host in a certain window of time; these are called temporal proximity correlations in the upcoming Sigma specification. A complete demo of these Spark streaming jobs is available in our git repo.

Discrete Detections

Performing discrete tests is fairly straightforward, thanks to all the built-in functions that come out-of-the-box in Spark. Spark has support for reading streaming sources, writing to sinks, checkpointing, stream-stream joins, windowed aggregations and many more. For a complete list of the possible functionalities, see the comprehensive Spark Structured Streaming Programming Guide.

Here's a high-level diagram showing a Spark streaming job that consumes events from an Iceberg table of "start-process" Windows events (1). A classic example of this is found in Windows Security Logs (Event ID 4688).

Topology for discrete detections

The source table (1) is named process_telemetry_table. The Spark job reads all events, detects anomalous events, tags these events and writes them to table (3) named tagged_telemetry_table. Events deemed anomalous are also written to a table (4) containing alerts. Periodically we poll a git repository (5) containing the SQL auto-generated from the Sigma rules we want to apply. If the SQL statements change, we restart the streaming job to add these new detections to the pipeline.

Let's take this Sigma rule as an example:

Screenshot from proc_creation_win_rundll32_sys.yml at Sigma HQ

The detection section is the heart of the Sigma rule and consists of a condition and 1 or more named tests. The selection1 and selection2 are named boolean tests. The author of the Sigma rule can give meaningful names to these tests. The condition is where the user can combine the tests in a final evaluation. See the Sigma specification for more details on writing a Sigma rule. From now on we will refer to these named boolean tests as tags.

The inner workings of the Spark streaming job are broken down into 4 logical steps: read the source table process_telemetry_table, perform pattern matching, evaluate the final condition, and write the results. The Pattern Match step consists of evaluating the tags found in the Sigma rule, and the Eval final condition step evaluates the condition. On the right of this diagram we show what the row would look like at this stage of processing. The columns in blue represent values read from the source table.
The Pattern Match step adds a column named Sigma tags which is a map of all the tests performed and whether the test passed or failed. The gray column contains the final Sigma rule evaluations. Finally, the brown columns are added in the foreachBatch function. A GUID is generated, the rule names that are true are extracted from the Sigma tags map and the detection action is retrieved from a lookup map of rule-name to rule-type. This gives context to the alerts produced.This diagram depicts how attributes of the event are combined into tags, final evaluation and finally contextual information.Let’s now look at the actual pyspark code. First, we connect spark to the source table using the readStream function and specifying the name from which the iceberg table is read. The load function returns a dataframe, which we use to create a view named process_telemetry_view.spark .readStream .format("iceberg") .option("stream-from-timestamp", ts) .option("streaming-skip-delete-snapshots", True) .option("streaming-skip-overwrite-snapshots", True) .load(constants.process_telemetry_table) .createOrReplaceTempView("process_telemetry_view")The data in the process_telemetry_view looks like this+-------------------+---+---------+---------------------+ |timestamp |id |parent_id|Commandline |+-------------------+---+---------+---------------------+|2022-12-25 000001|11 |0 | ||2022-12-25 000002|2 |0 |c\winotepad.exe ||2022-12-25 000003|12 |11 | ||2022-12-25 000008|201|200 |cmdline and args ||2022-12-25 000009|202|201 | ||2022-12-25 000010|203|202 |c\test.exe |+-------------------+---+---------+---------------------+On this view we apply a Pattern Matching step which consists of an auto-generated SQL statement produced by the Sigma compiler. The patern_match.sql file looks like thisselect *, -- regroup each rule's tags in a map (ruleName -> Tags) map( 'rule0', map( 'selection1', (CommandLine LIKE '%rundll32.exe%'), 'selection2', (CommandLine LIKE '%.sys,%' OR CommandLine LIKE '%.sys %'), ) ) as sigmafrom process_telemetry_viewWe use spark.sql() to apply this statement to the process_telemetry_view view.df = spark.sql(render_file("pattern_match.sql"))df.createOrReplaceTempView("pattern_match_view")Notice that the results of each tag found in the Sigma rule are stored in a map of boolean values. The sigma column holds the results of each tag found in each Sigma rule. By using a MapType we can easily introduce new Sigma rules without affecting the schema of the table. Adding a new rule simply adds a new entry in the sigmacolumn (a MapType) .+---+---------+---------------------+----------------------------------+|id |parent_id|Commandline |sigma+---+---------+---------------------+----------------------------------+|11 |0 | |{rule0 -> { selection1 -> false, selection2 -> false }, }Similarly, the Eval final condition step applies the conditions from the Sigma rules. The conditions are compiled into an SQL statement, which use map, map_filter, map_keys, to build a column named sigma_final. 
This column holds the name of all the rules that have a condition that evaluates to true.select *, map_keys( -- only keep the rule names of rules that evaluted to true map_filter( -- filter map entries keeping only rules that evaluated to true map( -- store the result of the condition of each rule in a map 'rule0', -- rule 0 -> condition all of selection* sigma.rule0.selection1 AND sigma.rule0.selection2) ) , (k,v) -> v = TRUE)) as sigma_finalfrom pattern_match_viewThe auto-generated statement is applied using spark.sql().df = spark.sql(render_file("eval_final_condition.sql"))Here’s the results with the newly added sigma_final column, an array of rules that fire.+---+---------+-------------------------------------+-------------+|id |parent_id|sigma | sigma_final |+---+---------+-------------------------------------+-------------+|11 |0 |{rule0 -> { | [] | selection1 -> false, selection2 -> false } }We are now ready to start the streaming job for our dataframe. Notice that we pass in a call back function for_each_batch_function to the foreachBatch.streaming_query = ( df .writeStream .queryName("detections") .trigger(processingTime=f"{trigger} seconds") .option("checkpointLocation", get_checkpoint_location(constants.tagged_telemetry_table) ) .foreachBatch(foreach_batch_function) .start() )streaming_query.awaitTermination()The for_each_batch_function is called at every micro-batch and is given the evaluated batchdf dataframe. The for_each_batch_function writes the entirety of batchdf to the tagged_telementry_table and also writes alerts for any of the Sigma rules that evaluated to true.def foreach_batch_function(batchdf, epoch_id) # Transform and write batchDF batchdf.persist() batchdf.createOrReplaceGlobalTempView("eval_condition_view") run("insert_into_tagged_telemetry") run("publish_suspected_anomalies") spark.catalog.clearCache()The details of insert_into_tagged_telemetry.sql and publish_suspected_anomalies.sql can be found in our git repo.As mentioned above, writing a streaming anomaly detection handling discreet test is relatively straightforward using the built-in functionality found in Spark.Detections Base on Past EventsThus far we showed how to detect events with discrete Sigma rules. In this section we leverage the flux-capacitor function to enable caching tags and testing tags of past events. As discussed in our previous articles, the flux-capacitor lets us detect parent-child relationships and also sequences of arbitrating features of past events.These types of Sigma rules need to simultaneously consider the tags of the current event and of past events. In order to perform the final rule evaluation, we introduce a Time travel tags step to retrieve all of past tags for an event and merge them with the current event. This is what the flux-capacitor function is designed to do, it caches and retrieves past tags. Now that past tags and current tags are on the same row, the Eval final condition can be evaluated just like we did in our discreet example above.The detection now looks like thisThe flux-capacitor is given the Sigma tags produced by the Pattern Match step. The flux-capacitor stores these tags for later retrieval. The column in red has the same schema as the Sigma tags column we used before. However, it combines current and past tags, which the flux-capacitor retrieved from its internal state.Adding caching and retrieval of past tags is easy thanks to the flux-capacitor function. Here’s how we applied the flux-capacitor function in our Spark anomaly detection. 
First, pass the dataframe produced by the Pattern Match step to the flux_stateful_function and the function returns another dataframe, which contains past tags.flux_update_spec = read_flux_update_spec()bloom_capacity = 200000# reference the scala codeflux_stateful_function = spark._sc._jvm.cccs.fluxcapacitor.FluxCapacitor.invoke# group logs by host_idjdf = flux_stateful_function( pattern_match_df._jdf, "host_id", bloom_capacity, flux_update_spec)output_df = DataFrame(jdf, spark)To control the behavior of the flux_stateful_function we pass in a flux_update_spec. The flux-capacitor specification is a yaml file produced by the Sigma compiler. The specification details which tags should be cached and retrieved and how they should be handled. The action attribute can be set to parent, ancestor or temporal.Let’s use a concrete example from Sigma HQ proc_creation_win_rundll32_executable_invalid_extension.ymlscreenshot from Sigma HQ githubAgain the heart of the detection consists of tags and of a final condition which puts all these tags together. Note however that this rule (that we will refer to as Rule 1) involves tests against CommandLine and also test on the parent process ParentImage. ParentImage is not a field found in the start-process logs. Rather it refers to the Image field of the parent process.As seen before, this Sigma rule will be compiled into SQL to evaluate the tags and to combine them into a final condition.In order to propagate the parent tags, the Sigma compiler also produces a flux-capacitor specification. Rule 1 is a parent rule and thus the specification must specify what are the parent and child fields. In our logs these correspond to id and parent_id.The specification also specifies which tags should be cached and retrieved by the flux-capacitor function. Here is the auto-generated specificationrules - rulename rule1 description proc_creation_win_run_executable_invalid_extension action parent tags - name filter_iexplorer - name filter_edge_update - name filter_msiexec_system32 parent parent_id child idNote Rule 0 is not included in the flux-capacitor function since it has no temporal tags.Illustrating Tag PropagationIn order to better understand what the flux-capacitor does, you can use the function outside a streaming analytic. Here we show a simple ancestor example. We want to propagate the tag pf. For example pf might represent a CommandLine containing rundll32.exe.spec = """ rules - rulename rule2 action ancestor child pid parent parent_pid tags - name pf """df_input = spark.sql(""" select * from values (TIMESTAMP '2022-12-30 000005', 'host1', 'pid500', '', map('rule1', map('pf', true, 'cf', false))), (TIMESTAMP '2022-12-30 000006', 'host1', 'pid600', 'pid500', map('rule1', map('pf', false, 'cf', false))), (TIMESTAMP '2022-12-30 000007', 'host1', 'pid700', 'pid600', map('rule1', map('pf', false, 'cf', true))) t(timestamp, host_id, pid, parent_pid, sigma) """)Printing the dataframe df_input we see that pid500 started and had a CommandLine with the pf feature. Then pid500 started pid600. Later pid600 started pid700. 
Printing the dataframe df_input, we see that pid500 started and had a CommandLine with the pf feature. Then pid500 started pid600. Later, pid600 started pid700, which had a child feature cf.

```
+-------------------+------+----------+--------------+-------------------------------------+
|timestamp          |pid   |parent_pid|human_readable|sigma                                |
+-------------------+------+----------+--------------+-------------------------------------+
|2022-12-30 00:00:05|pid500|          |[pf]          |{rule2 -> {pf -> true, cf -> false}} |
|2022-12-30 00:00:06|pid600|pid500    |[]            |{rule2 -> {pf -> false, cf -> false}}|
|2022-12-30 00:00:07|pid700|pid600    |[cf]          |{rule2 -> {pf -> false, cf -> true}} |
+-------------------+------+----------+--------------+-------------------------------------+
```

The Sigma rule is a combination of both pf and cf. In order to bring the pf tag back onto the current row, we need to apply time travel to the pf tag. Applying the flux-capacitor function to the df_input dataframe:

```python
jdf = flux_stateful_function(df_input._jdf, "host_id", bloom_capacity, spec, True)
df_output = DataFrame(jdf, spark)
```

We obtain the df_output dataframe. Notice how the pf tag is propagated through time.

```
+-------------------+------+----------+--------------+------------------------------------+
|timestamp          |pid   |parent_pid|human_readable|sigma                               |
+-------------------+------+----------+--------------+------------------------------------+
|2022-12-30 00:00:05|pid500|          |[pf]          |{rule2 -> {pf -> true, cf -> false}}|
|2022-12-30 00:00:06|pid600|pid500    |[pf]          |{rule2 -> {pf -> true, cf -> false}}|
|2022-12-30 00:00:07|pid700|pid600    |[pf, cf]      |{rule2 -> {pf -> true, cf -> true}} |
+-------------------+------+----------+--------------+------------------------------------+
```

The notebook TagPropagationIllustration.ipynb contains more examples like this for parent-child and temporal proximity.

Building Alerts with Context

The flux-capacitor function caches all the past tags. In order to conserve memory, it caches these tags using bloom filter segments. Bloom filters have an extremely small memory footprint and are quick to query and to update. However, they can produce false positives. It is thus possible that one of our detections is in fact a false positive. To remedy this, we put the suspected anomalies in a queue (4) for re-evaluation.

To eliminate false positives, a second Spark streaming job, named the Alert Builder, reads the suspected anomalies (5) and retrieves the events (6) that are required to re-evaluate the rule.

For example, in the case of a parent-child Sigma rule, the Alert Builder will read the suspected anomaly (5), retrieving a child process event. Next, in (6) it will retrieve the parent process of this child event. Then, using these two events, it re-evaluates the Sigma rule. This time, however, the flux-capacitor is configured to store tags in a hash map rather than in bloom filters. This eliminates false positives and, as a bonus, gives us all the events involved in the detection. We store this alert along with the rows of evidence (the parent and child events) in an alert table (7).

(Figure: topology with stateful detections (temporal))

The Alert Builder handles a fraction of the volume processed by (2) the Streaming Detections. Thanks to the low volume read in (5), historical searches into the tagged telemetry (6) are possible.

For a more in-depth look, see the Spark jobs for the Streaming Detections (streaming_detections.py) and the Alert Builder (streaming_alert_builder.py).
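To see why re-evaluation is needed at all, the snippet below is a deliberately tiny, self-contained toy Bloom filter; it is not the flux-capacitor's actual implementation, and its sizes are unrealistically small. It shows how a Bloom filter can report items as present even though they were never added, which is exactly the kind of false positive the Alert Builder weeds out by re-checking suspected anomalies with exact hash-map state.

```python
import hashlib

class ToyBloom:
    """A minimal Bloom filter: m bits and k hash functions, stored in one integer."""

    def __init__(self, m_bits=32, k_hashes=2):
        self.m, self.k, self.bits = m_bits, k_hashes, 0

    def _positions(self, item: str):
        # derive k deterministic bit positions from the item
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.m

    def add(self, item: str):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item: str) -> bool:
        # "maybe present" if every position is set; can be wrong when unrelated
        # items happen to have set the same bits
        return all(self.bits & (1 << pos) for pos in self._positions(item))

bloom = ToyBloom()
for pid in range(30):
    bloom.add(f"host1/pid{pid}")          # cache tags for 30 "processes"

# query 50 process ids that were never added; any hit is a false positive
hits = sum(bloom.might_contain(f"host1/pid{pid}") for pid in range(1000, 1050))
print(f"{hits} of 50 never-added ids look present")  # with a filter this small, typically many
```

The real flux-capacitor uses much larger bloom segments (the bloom_capacity setting above), so false positives are rare, but they cannot be ruled out entirely, hence the second pass.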
Performance

To evaluate the performance of this proof of concept we ran tests on machines with 16 CPUs and 64 GB of RAM. We wrote a simple data producer that creates 5,000 synthetic events per second and ran the experiment for 30 days.

The Spark Streaming Detections job runs on one machine and is configured to trigger every minute. Each micro-batch (trigger) reads 300,000 events and takes on average 20 seconds to complete. The job can easily keep up with the incoming event rate.

(Figure: Spark Streaming Detections)

The Spark Alert Builder also runs on a single machine and is configured to trigger every minute. This job takes between 30 and 50 seconds to complete and is very sensitive to the organization of the tagged_telemetry_table. Here we see the effect of the maintenance job, which organizes and sorts the latest data every hour: each time it runs, the Spark Alert Builder's micro-batch execution time drops back to 30 seconds.

(Figure: Spark Streaming Alert Builder)

Table Maintenance

Our Spark streaming jobs trigger every minute and thus produce small data files every minute. To allow for fast searches and retrieval, it's important to compact and sort the data periodically. Fortunately, Iceberg comes with built-in procedures to organize and maintain your tables.

For example, this script, maintenance.py, runs every hour to sort and compact the newly added files of the Iceberg tagged_telemetry_table.

```sql
CALL catalog.system.rewrite_data_files(
    table => 'catalog.jc_sched.tagged_telemetry_table',
    strategy => 'sort',
    sort_order => 'host_id, has_temporal_proximity_tags',
    options => map('min-input-files', '100',
                   'max-concurrent-file-group-rewrites', '30',
                   'partial-progress.enabled', 'true'),
    where => 'timestamp >= TIMESTAMP \'2023-05-06 00:00:00\' '
)
```

At the end of the day we also re-sort this table, yielding maximum search performance over long search periods (months of data).

```sql
CALL catalog.system.rewrite_data_files(
    table => 'catalog.jc_sched.tagged_telemetry_table',
    strategy => 'sort',
    sort_order => 'host_id, has_temporal_proximity_tags',
    options => map('min-input-files', '100',
                   'max-concurrent-file-group-rewrites', '30',
                   'partial-progress.enabled', 'true',
                   'rewrite-all', 'true'),
    where => 'timestamp >= TIMESTAMP \'2023-05-05 00:00:00\' AND timestamp < TIMESTAMP \'2023-05-06 00:00:00\' '
)
```

Another maintenance task is deleting old data from the streaming tables. These tables are only used as buffers between producers and consumers, so every day we age off the streaming tables, keeping 7 days of data.

```sql
delete from catalog.jc_sched.process_telemetry_table
where timestamp < current_timestamp() - interval 7 days
```

Finally, every day we perform standard Iceberg table maintenance tasks, like expiring snapshots and removing orphan files. We run these maintenance jobs on all of our tables and schedule these jobs on Airflow.

Conclusion

In this article we showed how to build a Spark streaming anomaly detection framework that generically applies Sigma rules. New Sigma rules can easily be added to the system.

This proof of concept was extensively tested on synthetic data to evaluate its stability and scalability. It shows great promise, and further evaluation will be performed on a production system.

All images, unless otherwise noted, are by the author.

Anomaly Detection Using Sigma Rules: Build Your Own Spark Streaming Detections was originally published in Towards Data Science on Medium, where people are continuing the conversation by highlighting and responding to this story.


How to optimize sql query in Microsoft SQL Server
Category: SQL

1. Keep in mind that when you write a Stored Procedure, SQL Server generates an SQL plan. If you ha ...


Views: 463 Likes: 102
How to Insert two corresponding columns into a tem ...
Category: Other

Question How do you insert two columns corresponding to each other in a temp ta ...


Views: 0 Likes: 9
Access Database IF Statement
Category: Databases

  Funny story, so when I was working on a division by 0 error ...


Views: 318 Likes: 100
Can not connect to SQL server in docker container ...
Category: Docker

Problem: The challenge was to connect to an SQL Server Instan ...


Views: 2004 Likes: 93
Founding Fullstack Engineer at Writesonic | Y Combinator

## About Writesonic

[Writesonic](http://writesonic.com/) is an **AI-powered creativity platform** that generates marketing content so good you can’t believe it wasn’t written by a human. With a few lines of text, Writesonic will generate ads, blog posts, landing pages, product descriptions, and 40 other types of content. We are saving agencies, eCommerce brands, and marketing teams about 80% of the time and effort it would take them to do the job themselves. And we're decreasing their costs dramatically! Our customers love us, as evidenced by the 2000+ 5-star reviews on platforms like G2, TrustPilot, and Capterra. We are a seed-stage company, well funded by top investors like Y Combinator, HOF Capital, Broom Ventures, Amino Capital, etc., and are building a real business fueled by real revenue, not just venture capital. We're a small, creative, and hard-working team looking for people with a founder mindset to join us.

## About the role

We are expanding the functionality of our application based on market requirements and customer feedback, and are also building integrations with platforms like Shopify, WordPress, etc. So, we are looking for talented full-stack engineers who can understand our domain, research the market, own the application, and drive development from the ground up. This position says founding engineer. We really mean that. You'd build critical features from the ground up, and deliver 5x faster than you have at previous jobs. We're looking for someone who is serious about building, and who can wear many hats. Your day-to-day work will directly influence how customers and partners interact with our products.

## **Who you are**

- You have designed, built, and maintained production-grade web applications. People describe you as "extremely productive."
- You have a strong grasp of engineering fundamentals.
- You are excited about working on a small team and helping us set the long-term vision for our engineering organization and product direction.
- You have experience in areas such as databases, UX implementation, debugging, and full-stack performance measurement and optimization.
- You're flexible, trustworthy, energetic, and a great communicator.
- You care more about solving customers' problems and building a delightful experience than about using a particular technology to do it.
- **Big bonus points** for AI, design and/or devops experience.

## What you'll do

- Take ownership of Writesonic's full-stack web platform, all the way to deploys, monitoring, debugging, and overall reliability.
- Research the market and competition, and come up with new features or ways to improve the system.
- Help design and implement core architecture.
- Ship major features often, iterate on those features, and build out our core infrastructure.
- Be exposed to every level of the company, working closely with the CEO to meet business goals and working off customer feedback.

### **Required Technical Skills and Qualifications**

- **3+ years of experience in a software engineering role at a product-based company or startup.**
- You have a strong background in computer science with a degree in CS or a related field from a reputed institute like IIT (Delhi, Mumbai, Kanpur, Chennai, Roorkee), IIIT (Hyderabad or Delhi), or BITS Pilani.
- Experience with building, deploying, and scaling production-level web applications.
- Experience with the microservices architecture.
- Experience in developing REST APIs to serve web clients, preferably using Python-based web frameworks like FastAPI, Flask, etc.
- Experience building modern frontends using ReactJs/VueJs or similar.
- Experience putting high-fidelity custom UI/UX designs into code using HTML/CSS/JS.
- Experience with SQL-based relational databases.
- Experience building public-facing websites that work elegantly across commonly used browsers.

### **Bonus Points**

- Proven experience with design, architecture, and UI/UX principles.

## Technology

**Frontend**

- NextJs/React with Redux
- TypeScript
- TailwindCSS

**Backend**

- Python (FastAPI)
- PostgreSQL
- Kubernetes for deployment and orchestration




