
What is New in C-Sharp 9 Programming Language
Category: .Net 7

C# is a high-level programming language; it is used in many business applications, Game ...


Views: 278 Likes: 102
 C# 9 Deep Dive: Target Typing and Covariant Returns

We’ve been quite busy, my friends. In this C# 9 deep dive series, we’ve looked at init-only features, records, pattern matching, and then top-level programs. To complete this series (before showing off everything in a single app), we’ll discuss the last two items featured in the Build 2020 announcement: target typing and covariant returns. These are not related, but I’ve decided to bundle them in a single blog post.

This is the fifth post in a six-post series on C# 9 features in depth:

Post 1 - Init-only features
Post 2 - Records
Post 3 - Pattern matching
Post 4 - Top-level programs
Post 5 (this post) - Target typing and covariant returns
Post 6 - Putting it all together with a scavenger hunt

This post covers the following topics:

- Improved target typing
- Target-typed new expressions
- Target typing with conditional operators
- Covariant returns
- Wrapping up

Improved target typing

C# 9 includes improved support for target typing. What is target typing, you say? It’s what C# uses, normally in expressions, to get a type from its context. A common example is the var keyword: the type can be inferred from its context, without you needing to declare it explicitly.

The improved target typing in C# 9 comes in two flavors: new expressions, and target typing with the ?? and ?: operators.

Target-typed new expressions

With target-typed new expressions, you can leave out the type you instantiate.
At first glance, this appears to only work with direct instantiation, not coupled with var or constructs like ternary statements.

Let’s take a condensed Person class from previous posts:

public class Person
{
    private string _firstName;
    private string _lastName;

    public Person(string firstName, string lastName)
    {
        _firstName = firstName;
        _lastName = lastName;
    }
}

To instantiate a new Person, you can omit the type on the right-hand side of the assignment:

class Program
{
    static void Main(string[] args)
    {
        Person person = new ("Tony", "Stark");
    }
}

A big advantage of target-typed new expressions comes when you are initializing new collections. If I wanted to create a list of multiple Person objects, I wouldn’t need to include the type every time I create a new object. With the same Person class in place, you can change the Main function to this:

class Program
{
    static void Main(string[] args)
    {
        var personList = new List<Person>
        {
            new ("Tony", "Stark"),
            new ("Howard", "Stark"),
            new ("Clint", "Barton"),
            new ("Captain", "America")
            // ...
        };
    }
}

Target typing with conditional operators

Speaking of ternary statements, we can now infer types by using the conditional operators. This works well with ??, the null-coalescing operator. The ?? operator returns the value of what’s on the left if it is not null.
Otherwise, the right-hand side is evaluated and returned.

So, imagine we have some objects that share the same base class, like this:

public class Person
{
    private string _firstName;
    private string _lastName;

    public Person(string firstName, string lastName)
    {
        _firstName = firstName;
        _lastName = lastName;
    }
}

public class Student : Person
{
    private string _favoriteSubject;

    public Student(string firstName, string lastName, string favoriteSubject)
        : base(firstName, lastName)
    {
        _favoriteSubject = favoriteSubject;
    }
}

public class Superhero : Person
{
    private string _maxSpeed;

    public Superhero(string firstName, string lastName, string maxSpeed)
        : base(firstName, lastName)
    {
        _maxSpeed = maxSpeed;
    }
}

While the code below does not get past the compiler in C# 8, it will in C# 9, because there’s a target (base) type that both branches are convertible to:

static void Main(string[] args)
{
    Student student = new Student("Dave", "Brock", "Programming");
    Superhero hero = new Superhero("Tony", "Stark", "10000");

    Person anotherPerson = student ?? hero;
}

Covariant returns

It has been a long time coming—almost two decades of begging and pleading, actually. With C# 9, return type covariance is finally coming to the language. You can now say bye-bye to implementing some interface workarounds.

OK, so just saying return type covariance makes me sound super smart, but what is it? With return type covariance, you can override a base class method (that has a less-specific return type) with one that returns a more specific type.

Before C# 9, you would have to return the same type in a situation like this:

public virtual Person GetPerson()
{
    // this is the parent (or base) class
    return new Person();
}

public override Person GetPerson()
{
    // you can return the child class, but still return a Person
    return new Student();
}

Now, you can return the more specific type in C# 9:

public virtual Person GetPerson()
{
    // this is the parent (or base) class
    return new Person();
}

public override Student GetPerson()
{
    // better!
    return new Student();
}

Wrapping up

In this post, we discussed how C# 9 improves target typing and adds covariant returns. We covered target-typed new expressions and their benefits (especially when initializing collections), as well as target typing with conditional operators. Finally, we discussed the long-awaited return type covariance feature in C# 9.


 How to use configuration with C# 9 top-level programs

I’ve been working with top-level programs in C# 9 quite a bit lately. When writing simple console apps in .NET 5, they allow you to remove the ceremony of a namespace and a Main(string[] args) method. It’s very beginner-friendly and allows developers to get going without worrying about learning about namespaces, arrays, arguments, and so on. While I’m not a beginner—although I feel like it some days—I enjoy using top-level programs to prototype things quickly.

With top-level programs, you can work with normal functions, use async and await, access command-line arguments, use local functions, and more. For example, here’s me working with some arbitrary strings and getting a random quote from the Ron Swanson Quotes API:

using System;
using System.Net.Http;

var name = "Dave Brock";
var weekdayHobby = "code";
var weekendHobby = "play guitar";

var quote = await new HttpClient().GetStringAsync("https://ron-swanson-quotes.herokuapp.com/v2/quotes");

Console.WriteLine($"Hey, I'm {name}!");
Console.WriteLine($"During the week, I like to {weekdayHobby} and on the weekends I like to {weekendHobby}.");
Console.WriteLine($"A quote to live by: {quote}");

Add configuration to a top-level program

Can we work with configuration in top-level programs? (Yes—whether we should is a different conversation, of course.)

To be clear, there are many, many ways to work with configuration in .NET. If you’re used to it in ASP.NET Core, for example, you’ve most likely done it through constructor dependency injection, by wiring up a ServiceCollection in your middleware, or by using the Options pattern—so you may think you won’t be able to do it with top-level programs. Don’t overthink it.
Using the ConfigurationBuilder, you can easily use configuration with top-level programs. Let’s create an appsettings.json file to replace our hard-coded values with configuration values:

{
  "Name": "Dave Brock",
  "Hobbies": {
    "Weekday": "code",
    "Weekend": "play guitar"
  },
  "SwansonApiUri": "https://ron-swanson-quotes.herokuapp.com/v2/quotes"
}

Then, make sure your project file has the following packages installed, and that the appsettings.json file is copied to the output directory:

<ItemGroup>
  <PackageReference Include="Microsoft.Extensions.Configuration" Version="5.0.0" />
  <PackageReference Include="Microsoft.Extensions.Configuration.Json" Version="5.0.0" />
  <PackageReference Include="Microsoft.Extensions.Configuration.EnvironmentVariables" Version="5.0.0" />
</ItemGroup>
<ItemGroup>
  <None Update="appsettings.json">
    <CopyToOutputDirectory>Always</CopyToOutputDirectory>
  </None>
</ItemGroup>

In your top-level program, create a ConfigurationBuilder with the appropriate values:

var config = new ConfigurationBuilder()
    .SetBasePath(Directory.GetCurrentDirectory())
    .AddJsonFile("appsettings.json")
    .Build();

With a config instance, you’re ready to read in your values:

var name = config["Name"];
var weekdayHobby = config.GetSection("Hobbies:Weekday");
var weekendHobby = config.GetSection("Hobbies:Weekend");
var quote = await new HttpClient().GetStringAsync(config["SwansonApiUri"]);

And here’s the entire top-level program in action:

using Microsoft.Extensions.Configuration;
using System;
using System.IO;
using System.Net.Http;

var config = new ConfigurationBuilder()
    .SetBasePath(Directory.GetCurrentDirectory())
    .AddJsonFile("appsettings.json")
    .Build();

var name = config["Name"];
var weekdayHobby = config.GetSection("Hobbies:Weekday");
var weekendHobby = config.GetSection("Hobbies:Weekend");
var quote = await new HttpClient().GetStringAsync(config["SwansonApiUri"]);

Console.WriteLine($"Hey, I'm {name}!");
Console.WriteLine($"During the week, I like to {weekdayHobby.Value}" +
    $"
 and on the weekends I like to {weekendHobby.Value}.");
Console.WriteLine($"A quote to live by: {quote}");

Review the generated code

When you throw this into the ILSpy decompilation tool, you can see there’s not a lot of magic here. The top-level program merely wraps the code in a Main(string[] args) method and replaces our implicit typing:

using System;
using System.IO;
using System.Net.Http;
using System.Runtime.CompilerServices;
using System.Threading.Tasks;
using Microsoft.Extensions.Configuration;

[CompilerGenerated]
internal static class <Program>$
{
    private static async Task <Main>$(string[] args)
    {
        IConfigurationRoot config = new ConfigurationBuilder().SetBasePath(Directory.GetCurrentDirectory()).AddJsonFile("appsettings.json").Build();
        string name = config["Name"];
        IConfigurationSection weekdayHobby = config.GetSection("Hobbies:Weekday");
        IConfigurationSection weekendHobby = config.GetSection("Hobbies:Weekend");
        string quote = await new HttpClient().GetStringAsync(config["SwansonApiUri"]);
        Console.WriteLine("Hey, I'm " + name + "!");
        Console.WriteLine("During the week, I like to " + weekdayHobby.Value + " and on the weekends I like to " + weekendHobby.Value + ".");
        Console.WriteLine("A quote to live by: " + quote);
    }
}

Wrap up

In this quick post, I showed you how to work with configuration in C# 9 top-level programs. We used a ConfigurationBuilder to read from an appsettings.json file, and we also reviewed the generated code.


How to Develop Drupal 8, PHP and MySQL on Windows 1 ...
Category: Linux

This document is still in edit. When developing using PHP7.2-fpm (a highly performant PHP engine for ...


Views: 270 Likes: 89
 C# 9: Answering your questions

Note: Originally published five months before the official release of C# 9, I’ve updated this post after the release to capture the latest updates.

In the last month or so, I’ve written almost 8,000 words about C# 9. That seems like a lot (and it is!) but there is so much to cover! I’ve talked about how it reduces mental energy, simplifies null validation, and took on a deep dive series featuring init-only features, records, pattern matching, top-level programs, and target typing and covariant returns.

After publishing all these posts, I received a lot of great questions in my Disqus comment section. Instead of burying the conversations there, I’d like to discuss these questions in case you missed them. I learned a lot from the questions, so thank you all!

Init-only features

From the post on init-only features, we had two questions.

Fernando Margueirat asks: What’s the difference between init and readonly?

The big difference is that with C# 9 init-only properties you are allowed to use object initialization. With readonly properties, you cannot. The Microsoft announcement says: “The one big limitation today is that the properties have to be mutable for object initializers to work … first call the object’s constructor and then assigning to the property setters.” Because readonly value types are immutable, you can’t use them with object initializers.

saint4eva asks: Can a get-only property provide the same level of immutability as an init-only property?

Similar to the last question: init-only properties allow for initialization, while get-only properties are read-only and do not.

Records

From the post on record types, we also had one and a half questions.

WOBFIE says: So we should use so called “records” just because… some monkey encoded “struct” as “class”?!

OK, this is less of a question and more of something that made me laugh (and why I say this is half of a question, as much as I love all my readers).
But let’s read between the lines of what WOBFIE might be thinking—something along the lines of this being a hacked-together struct? In the post itself, I explained the rationale for adding a new construct over building on top of struct:

- An easy, simplified construct whose intent is to be used as an immutable data structure, with easy syntax like with expressions to copy objects
- Robust equality support with Equals(object), IEquatable<T>, and GetHashCode()
- Constructor/deconstructor support with simplified positional records

The endgame is not to complicate workarounds—it is to devote a construct to immutability that doesn’t require a lot of wiring up on your end.

Daniel DF says: I would imagine that Equals performance decreases with the size of the record, particularly when comparing two objects that are actually equal. Is that true?

That is a wonderful question. Since I was unsure, I reached out to the language team on their Gitter channel. I got an answer within minutes, so thanks to them! Here is what Cyrus Najmabadi says:

Equals is spec’ed at the language level to do pairwise equality of the record members. They have value semantics. In general, the basic approach of implementing this would likely mean you pay more CPU for equality checks. Though the language doesn’t concern itself with that. It would be an implementation detail of the compiler.

Target typing and covariant returns

From my post on target typing and covariant returns, we had one question from two different readers.

Pavel Voronin and saint4eva ask: Are covariant return types a runtime feature, or just language sugar?

This was another question I sent to the team. Short answer: covariant return types are a runtime feature. Long answer: it could have been implemented as syntactic sugar only, using stubs—but the team was concerned about it leading to worse performance and increased code bloat when working with a lot of nested hierarchies.
Therefore, they went with the runtime approach. Also, while in the Gitter channel I learned that covariant returns are only supported for classes as of now. The team will look to address interfaces at a later time.


Solved!! Visual Studio 2019 Shows a big Cursor and ...
Category: Technology

Problem: When coding in Visual Studio, all of a sudden you see a big cursor, and when you start ...


Views: 800 Likes: 98
SQL Server Tips and Tricks
Category: SQL

Error Debugging: Did you know you could double-click on the SQL error and ...


Views: 0 Likes: 44
Grasping Loc and iLoc in Pandas

I am going to attempt to help you grasp loc and iloc in Pandas. I set out to describe these two concepts because I and others have struggled with them. When I had my Aha! moment, I knew then it was time to write about it. So let's get you over the hump of learning these two constructs!

I am assuming that since you are reading this, you are a beginner (or close to it) in Python/Pandas, and that you have probably been struggling with learning about loc and iloc. Based on this assumption, I am going to deviate from my usual format - that is, I am not going to include the code on GitHub, or the data for that matter. Don't know Pandas? No worries. You can view one of the best ways to learn and get started immediately.

The examples I show are short enough for you to type, and typing in the examples yourself will help your brain with the learning process. If I give you the code, you may be tempted (like I often am) to simply try to learn by scanning the code with your eyes. Interactive learning, especially in coding, is really the fastest way to reinforce concepts. For this reason, I am going to ask you to enter the code examples manually. You'll thank me for it later!

Scenario: Customer Sales

You are a data analyst helping out an e-commerce business owner. The owner just started out but wants a tool that can help them track sales to their customers. To start, you want to help your client find specific customers. Eventually, you can aggregate sales by region or income strata. But first, you need to help your client find those customers, and to do that, we'll need to load sales data.
Here is the segment of code that creates the data for this tutorial:

data = {'CustomerID': ['X1000', 'X1010', 'X1020', 'X1030', 'X1040', 'X1050'],
        'Name': ['John', 'Ann', 'Joe', 'Alice', 'Susan', 'Bill'],
        'Age': [30, 19, 25, 53, 38, 68],
        'Region': ['North', 'North', 'South', 'East', 'South', 'West'],
        'Income Strata': ['High', 'Medium', 'Medium', 'Low', 'Medium', 'High'],
        'Sales': [250, 5000, 132, 400, 780, 223]}
row_labels = [101, 102, 103, 104, 105, 106]

After you load the data, you'll need to put it into a DataFrame:

sales = pd.DataFrame(data, index=row_labels)

You may be wondering why I chose the row_labels numbers shown above (sequenced from 101-106). It's because most people use sequences starting with 0, which makes it seem like the labels are the same as the index positions of each of the rows. Remember, though: when you set an index like we did with row_labels, that is not the position of the index. It is just a label. This is crucial to understand! Many tutorials start with sequential numbers beginning at 0. This is the cause of the confusion: it makes newcomers believe that the labels are the same as the position of the index, and that is not the case. Below, I will create 0-based labels and you'll see for yourself that they are not the same as the positional numbers!

Related: Don't know what a DataFrame is? See how Learning Pandas will Transform Your Data Analysis.

How to Access Data

You can probably guess from the title of this tutorial that we'll be using loc and iloc to access our data. The difference between the two is that loc[] accesses data by label and iloc[] accesses data by its positional index. Know that labels can be numeric, and in our case, that is what they are. Labels can also be text-based (more below) and dates. To access labels of DataFrame objects, use .loc[]. To access via the index position of the objects, use .iloc[].
To get the row corresponding to the index label 101:

sales.loc[101]

NOTE: Wherever the row containing the label 101 ends up, this operation will always locate that row. For instance, if you sort the DataFrame and the row with 101 as the index label lands in, say, the third index position, this operation will still retrieve that row (which is now the third row).

To get the first row (positional), no matter how you alter or sort the DataFrame:

sales.iloc[0]

The way the sales DataFrame is configured currently (i.e., no sorting or alterations), these two operations return the same record, i.e., the first one.

Challenge: For the current configuration of this DataFrame, what happens when you try to access the following?

sales.loc[0]

{Scroll down for answer}

As you can see, you get an error. Hopefully, you can see why: there is no index with the label 0, and .loc[] accesses labels. The only labels available in this current configuration are 101-106. .iloc[0] will work because it gets the first record of the DataFrame no matter what is there and no matter what labels were defined. If this isn't clear, don't worry. We're not done yet. The next part of the tutorial should bring it home.

What we are going to do now is create the labels as 0,1,2,3,4,5, and then access the label 0 and the position index 0. Without doing anything else to the DataFrame, both of these instructions should point to the same first record. First, we'll create a new DataFrame (from the same original data) called sales1 with the 0-based index:

row_labels = [0, 1, 2, 3, 4, 5]
sales1 = pd.DataFrame(data, index=row_labels)

Two items to observe here. First, we can now access .loc[0]. Why? Because it is part of the index labels that we defined for sales1, i.e., 0-5. Second, as specified, .loc[0] and .iloc[0] point to the same record. Check!

Next, let's sort the data by CustomerID descending. This will essentially reverse the label index.
But see if you can guess what it will do to the positional index. Don't worry. That's what this exercise is all about - to help you see what will happen. Let's create a new DataFrame (sales2) that is a copy of sales1, sorted by CustomerID descending:

sales2 = sales1.sort_values(by="CustomerID", ascending=False).copy()

You'll notice the code fragment shows that I have not run .loc[0] or .iloc[0]. I want you to try to guess, based on the data, which rows each of these will display. Remember: .loc[] is for index labels and .iloc[] is for positional indexes. Hint: I stated earlier that .iloc[0] will ALWAYS point to the first row no matter what you do to the DataFrame (like sort it). However, is the same true for .loc[0]?

Here is the result of both instructions: the two instructions point to markedly different records. The .loc[0] points to the last record in the DataFrame (because of the sort), and the .iloc[0] points to where? You guessed it - the first row, as it always will. So whatever row ends up in the first row due to the operations you perform on the DataFrame, .iloc[0] will access that first record for you. But the positional index of a label-based row will depend on what type of operation you perform on the DataFrame.

Related: 10 minutes to Pandas

For sales1 and sales2, what specific row did .loc[0] return? It returned the same row that contained the label 0. But for sales, the label 0 did not exist. Instead, the label for that same record was 101. Let's try this exercise again, and this time we'll copy the sales2 DataFrame but sort it by Sales descending. But before we do, let's go over the guidelines set out in this tutorial thus far.

.loc[0] will return the row that contains the row label 0. In sales1 and sales2, this corresponded to CustomerID X1000 with the name of John, who is age 30 in the North region. John is also categorized as a High income individual, but he is only responsible for $250 in sales.
It did not matter that sales1 was sorted by the label index and sales2 was sorted by CustomerID descending: .loc[0] always found CustomerID X1000 (which corresponds to row label 0).

.iloc[0] will always return the first row of the DataFrame - irrespective of what you do to the DataFrame (sort, aggregate, etc.). What exists in this row will depend on the operations you perform, but it will always return that first record.

Based on these guidelines, what do you believe will be returned for .loc[0] and .iloc[0]? Once again, I am not showing the results yet so that you can think about what will be displayed. Are you ready for the results? As expected, .loc[0] returned that same customer, X1000. And since the very first record of this new sort is customer X1010, that is the record shown when calling .iloc[0].

Next Steps

Hopefully, you got this on the first try. I think this tutorial should give you a leg up on how these two constructs (loc and iloc) are supposed to work. But if you didn't catch it this time, please feel free to go through this tutorial a few times until you do get it. If you feel comfortable, try guessing what would happen with different indexes like .loc[1] and .iloc[3]. Do they return what you think they should?

You can also try adding text-based labels or, even better, making the CustomerID the index. In fact, let's do that, shall we? Use set_index("CustomerID") to set the index to the customer ID. Once again, see if you can guess the result of each of the different .loc and .iloc operations. You should be able to nail this by now! Let's start with .loc[0]. Will this work? If you did not guess that this operation would fail, you may want to run through this tutorial from the start. But I am guessing that you picked right up on it. The rest of the items will not bomb out with an error (to give you a hint!). Hopefully, you nailed these as well!

When you grasp these concepts, you open up a new world for your data analysis.
There is more to these commands than is covered here. But when you get your Aha! moment, those other concepts will fall into place much quicker. It certainly did for me!

Learn Pandas Right Now: When you learn Pandas, you open up doors to companies looking for this in-demand skill. Click on the button to learn about a resource that will get you up to speed, quickly! Start Learning Right Here
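The whole walkthrough above can be condensed into a short, runnable sketch (same data and sorts as in the tutorial), so you can check each guess with assertions:

```python
import pandas as pd

data = {'CustomerID': ['X1000', 'X1010', 'X1020', 'X1030', 'X1040', 'X1050'],
        'Name': ['John', 'Ann', 'Joe', 'Alice', 'Susan', 'Bill'],
        'Age': [30, 19, 25, 53, 38, 68],
        'Region': ['North', 'North', 'South', 'East', 'South', 'West'],
        'Income Strata': ['High', 'Medium', 'Medium', 'Low', 'Medium', 'High'],
        'Sales': [250, 5000, 132, 400, 780, 223]}

# sales1: 0-based labels, so .loc[0] and .iloc[0] agree
sales1 = pd.DataFrame(data, index=[0, 1, 2, 3, 4, 5])
assert sales1.loc[0]['CustomerID'] == 'X1000'
assert sales1.iloc[0]['CustomerID'] == 'X1000'

# sales2: sorted by CustomerID descending - .loc[0] still finds label 0,
# but .iloc[0] now returns whatever row landed first (X1050)
sales2 = sales1.sort_values(by='CustomerID', ascending=False).copy()
assert sales2.loc[0]['CustomerID'] == 'X1000'
assert sales2.iloc[0]['CustomerID'] == 'X1050'

# sales3: sorted by Sales descending - Ann (X1010, sales 5000) is now first
sales3 = sales2.sort_values(by='Sales', ascending=False).copy()
assert sales3.loc[0]['CustomerID'] == 'X1000'
assert sales3.iloc[0]['CustomerID'] == 'X1010'
```

Run it top to bottom; if every assert passes silently, you've internalized the difference: .loc answers "which label?", .iloc answers "which position?".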


Asp.Net MVC Development Notes
Category: .Net 7

<a href="https://www.freecodecamp.org/news/an-awesome-guide-on-how-to-build-restful-apis-w ...


Views: 752 Likes: 79
Linux Selecting Time Zone does not change time tim ...
Category: Linux

Question Why is it so hard to change time and synchronize time on Multiple Serv ...


Views: 7 Likes: 39
Short Cut for Creating Constructor in C-Sharp
Category: C-Sharp

It is very helpful when developing software to know the shortcut to implement a code snippet. For exam ...


Views: 304 Likes: 86
Start Learning TypeScript with these Short Videos

TypeScript continues to grow in popularity, and for good reason. It adds “guard rails” to your code to help you spot issues early on, easily locate problem code, enhance productivity, provide consistency across code, and much more. While there are a lot of TypeScript resources out there to get started learning the language, where can you go to get started quickly without wasting a lot of time?

I recently published a series of short videos on TypeScript core concepts that can provide a great starting point. The videos are short, super focused, and many of them use the online TypeScript Playground to demonstrate different concepts. There are a few videos on getting started working with TypeScript locally on your machine as well. Here’s more information about each video.

1. Why Learn TypeScript?
Is it worth your time to learn TypeScript? Short answer (in my opinion anyway): YES! In this video I’ll walk through 5 reasons learning TypeScript is worth the effort. Since these videos are intended to be short I could only cover 5, but there are many additional reasons as well!

2. Adding TypeScript to a VS Code Project
How do you get started using TypeScript, writing, and building your code? I’ll walk you through the basics of that process in this video.

3. How to Add WebPack to a TypeScript Project
WebPack’s scary, right? Well, truth be told it can be intimidating at times, but it’s pretty easy to use in TypeScript projects. I’ll walk you through the process in this video.

4. Getting Started with TypeScript Types
It’s no secret that TypeScript adds “strong typing” to your code (they call it TypeScript for a reason). In this video I’ll explain the primitive data types available and show how you can get started using them.

5. Using Classes in TypeScript
Classes are a feature available in JavaScript that can be used to encapsulate your code. They’re not needed for every type of project, but it’s good to know what they’re capable of.
In this video I’ll introduce classes and show how they can be used in TypeScript.

6. Using Interfaces in TypeScript
In an earlier video I introduced the concept of TypeScript types. In this video, I walk you through how you can use interfaces to build custom types and explain why you may want to do that. Interfaces are “code contracts” that can be used to describe the “shape” of an object, drive consistency across objects, and more.

7. Using Generics with TypeScript
Generics are “code templates” that can be reused in your code base. In this video I introduce the concept of generics and show simple examples of how they can be used in TypeScript.

Are there more topics that I could have covered? Yep – there’s always more. However, these videos should provide you with a solid starting point to understand core concepts and features. There are a lot of additional resources out there to learn TypeScript (you can start with the docs or the handbook), but I hope these short videos help get you started quickly. I’m personally a huge fan of TypeScript and highly recommend making time to learn it. If you’d like to dive into more details about TypeScript fundamentals, check out the TypeScript Fundamentals video course on Pluralsight that John Papa and I created.
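To give a quick taste of the interface and generics topics above, here is a minimal sketch (the names and shapes are my own illustration, not code from the videos):

```typescript
// A "code contract" describing the shape of an object
interface Person {
  firstName: string;
  lastName: string;
}

// A generic "code template": works for any element type T
function firstOrDefault<T>(items: T[], fallback: T): T {
  return items.length > 0 ? items[0] : fallback;
}

const people: Person[] = [
  { firstName: "John", lastName: "Papa" },
  { firstName: "Dan", lastName: "Wahlin" },
];

// The compiler infers T = Person here and checks the fallback's shape too
const first = firstOrDefault(people, { firstName: "N", lastName: "A" });
console.log(`${first.firstName} ${first.lastName}`); // prints "John Papa"
```

You can paste this straight into the TypeScript Playground and try breaking the contract (e.g. drop lastName from one object) to see the "guard rails" kick in.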


Connect to Database Using CMD
Category: Databases

Sometimes it saves a lot of time to just connect to the database using the command prompt; it's easy ...


Views: 379 Likes: 86
Linux Ubuntu Commands that will increase your prod ...
Category: Linux

Important Comman ...


Views: 498 Likes: 70
Introducing Bash for Beginners

A new Microsoft video series for developers learning how to script.

According to the Stack Overflow 2022 Developer Survey, Bash is one of the top 10 most popular technologies. This shouldn't come as a surprise, given the popularity of Linux systems, with the Bash shell readily installed, across many tech stacks and the cloud. On Azure, more than 50 percent of virtual machine (VM) cores run on Linux. There is no better time to learn Bash!

Long gone are the days of feeling intimidated by a black screen with text known as a terminal. Say goodbye to blindly typing in “chmod 777” while following a tutorial. Say hello to task automation, scripting fundamentals, programming basics, and your first steps to working with a cloud environment via the Bash command line.

What we’ll be covering

My cohost, Josh, and I will walk you through everything you need to get started with Bash in this 20-part series. We will provide an overview of the basics of Bash scripting, starting with how to get help from within the terminal. The terminal is a window that lets you interact with your computer’s operating system, and in this case, the Bash shell. To get help with a specific command, you can use the man command followed by the name of the command you need help with. For example, man ls will provide information on the ls command, which is used for listing directories and finding files.

Once you’ve gotten help from within the terminal, you can start navigating the file system. You’ll learn how to list directories and find files, as well as how to work with directories and files themselves. This includes creating, copying, moving, and deleting directories and files. You’ll also learn how to view the contents of a file using the cat command.

Another important aspect of Bash is environment variables. These are values that are set by the operating system and are used by different programs and scripts. In Bash, you can access these variables using the “$” symbol followed by the name of the variable.
For example, $PATH will give you the value of the PATH environment variable, which specifies the directories where the shell should search for commands.

Redirection and pipelines are two other important concepts in Bash. Redirection allows you to control the input and output of a command, while pipelines allow you to chain multiple commands together. For example, you can use the “>” symbol to redirect the output of a command to a file, and the “|” symbol to pipe the output of one command to the input of another.

When working with files in Linux, you’ll also need to understand file permissions. In Linux, files have permissions that determine who can access them and what they can do with them. You’ll learn about the different types of permissions, such as read, write, and execute, and how to change them using the chmod command.

Next, we’ll cover some of the basics of Bash scripting. You’ll learn how to create a script, use variables, and work with conditional statements, such as "if" and "if else". You’ll also learn how to use a case statement, which is a way to control the flow of execution based on the value of a variable. Functions are another important aspect of Bash scripting, and you’ll learn how to create and use them to simplify your scripts. Finally, you’ll learn about loops, which allow you to repeat a set of commands multiple times.

Why Bash matters

Bash is a versatile and powerful language that is widely used. Whether you’re looking to automate tasks, manage files, or work with cloud environments, Bash is a great place to start. With the knowledge you’ll gain from this series, you’ll be well on your way to becoming a proficient Bash scripter.

Many other tools, like programming languages and command-line interfaces (CLIs), integrate with Bash, so not only is this the beginning of a new skill set, but also a good primer for many others. Want to move on and learn how to become efficient with the Azure CLI? Bash integrates with the Azure CLI seamlessly.
Want to learn a language like Python? Bash teaches you the basic programming concepts you need to know, such as flow control, conditional logic, and loops, which makes it easier to pick up Python. Want to have a Linux development environment on your Windows device? Windows Subsystem for Linux (WSL) has you covered, and Bash works there, too!

While we won't cover absolutely everything there is to know about Bash, we do make sure to leave you with a solid foundation. At the end of this course, you'll be able to continue on your own following tutorials, docs, books, and other resources. If live learning is more your style, catch one of our How Linux Works and How to Leverage It in the Cloud series webinars. We'll cover a primer on how Linux works, discuss how and when to use Linux on Azure, and get your developer environment set up with WSL.

This Bash for Beginners series is part of a growing library of video series on the Microsoft Developer channel for quickly learning new skills, including Python, Java, C#, Rust, JavaScript, and more.

Learn more about Bash in our open source community. Need help with your learning journey? Watch Bash for Beginners, and find Josh and myself on Twitter. Share your questions and progress on our Tech Community; we'll make sure to answer and cheer you on.

The post Introducing Bash for Beginners appeared first on the Microsoft Open Source Blog.


xi-editor retrospective

A bit more than four years ago I started the xi-editor project. Now I have placed it on the back burner (though there is still some activity from the open source community).

The original goal was to deliver a very high quality editing experience. To this end, the project spent a rather large number of “novelty points”:

- Rust as the implementation language for the core.
- A rope data structure for text storage.
- A multiprocess architecture, with front-end and plug-ins each with their own process.
- Fully embracing async design.
- CRDT as a mechanism for concurrent modification.

I still believe it would be possible to build a high quality editor based on the original design. But I also believe that this would be quite a complex system, and require significantly more work than necessary.

I’ve written the CRDT part of this retrospective already, as a comment in response to a Github issue. That prompted good discussion on Hacker News. In this post, I will touch again on CRDT but will focus on the other aspects of the system design.

Origins

The original motivation for xi came from working on the Android text stack, and confronting two problems in particular. One, text editing would become very slow as the text buffer got bigger. Two, there were a number of concurrency bugs in the interface between the EditText widget and the keyboard (input method editor).

The culprit of the first problem turned out to be the SpanWatcher interface, combined with the fact that modern keyboards like to put a spelling correction span on each word. When you insert a character, all the successive spans bump their locations up by one, and then you have to send onSpanChanged for each of those spans to all the watchers. Combined with the spans data structure’s naive O(n) implementation, the whole thing was quadratic or worse.
The concurrency bugs boiled down to synchronizing edits across two different processes, because the keyboard is a different process than the application hosting the EditText widget. Thus, when you send an update (to move the cursor, for example) and the text on the other side is changing concurrently, it’s ambiguous whether it refers to the old or new location. This was handled in an “almost correct” style, with timeouts for housekeeping updates to minimize the chance of a race. A nice manifestation of that is that swiping the cursor slowly through text containing complex emoji could cause flashes of the emoji breaking.

These problems have a unifying thread: in both cases there are small diffs to the text, but the data structures and protocols handled these diffs in a less than optimal way, leading to both performance and correctness bugs. To a large extent, xi started as an exploration into the “right way” to handle text editing operations.

In the case of the concurrency bugs, I was hoping to find a general, powerful technique to facilitate concurrent text editing in a distributed-ish system. While most of the Operational Transformation literature is focused on multiple users collaboratively editing a document, I was hoping that other text manipulations (like an application enforcing credit card formatting on a text input field) could fit into the general framework.

That was also the time I was starting to get heavily into Rust, so it made natural sense to start prototyping a new green-field text editing engine. How would you “solve text” if you were free of backwards compatibility constraints (a huge problem in Android)? When I started, I knew that Operational Transformation was a solution for collaborative editing, but had a reputation for being complex and finicky. I had no idea how deep the rabbit hole of OT and then CRDT would be. Much of that story is told in the CRDT discussion previously linked.
The lure of modular software

There is an extremely long history of people trying to build software as composable modules connected by some kind of inter-module communication fabric. Historical examples include DCE/RPC, Corba, Bonobo, and more recently things like Sandstorm and Fuchsia Modular. There are some partial successes, including Binder on Android, but this is still mostly an unrealized vision. (Regarding Binder, it evolved from a much more idealistic vision, and I strongly recommend reading this 2006 interview about OpenBinder.)

When I started xi, there were signs we were getting there. Microservices were becoming popular in the Internet world, and of course all Web apps have a client/server boundary. Within Google, gRPC was working fairly well, as was the internal process separation within Chrome. In Unix land, there’s a long history of the terminal itself presenting a GUI (if primitive, though gaining features such as color and mouse). There’s also the tradition of Blit and then, of course, NeWS and X11.

I think one of the strongest positive models was the database / business logic split, which is arguably the most successful example of process separation. In this model, the database is responsible for performance and integrity, and the business logic is in a separate process, so it can safely do things like crash and hang. I very much thought of xi-core as a database-like engine, capable of handling concurrent text modification much like a database handles transactions.

Building software in such a modular way requires two things: first, infrastructure to support remote procedure calls (including serialization of the requests and data), and second, well-defined interfaces. Towards the end of 2017, I saw the goal of xi-editor as primarily being about defining the interfaces needed for large scale text editing, and that this work could endure over a long period of time even as details of the implementation changed.
For the infrastructure, we chose JSON (about which more below) and hand-rolled our own xi-rpc layer (based on JSON-RPC). It turns out there are a lot of details to get right, including dealing with error conditions, negotiating when two ends of the protocol aren’t exactly on the same version, etc.

One of the bolder design decisions in xi was to have a process separation between front-end and core. This was inspired in part by Neovim, in which everything is a plugin, even GUI. But the main motivation was to build GUI applications using Rust, even though at the time Rust was nowhere near capable of native GUI. The idea is that you use the best GUI technology of the platform, and communicate via async pipes.

One argument for process separation is to improve overall system reliability. For example, Chrome has a process per tab, and if the process crashes, all you get is an “Aw, snap” without bringing the whole browser down. I think it’s worth asking the question: is it useful to have the front-end continue after the core crashes, or the other way around? I think probably not; in the latter case it might be able to safely save the file, but you can also do that by frequently checkpointing.

Looking back, I see much of the promise of modular software as addressing goals related to project management, not technical excellence. Ideally, once you’ve defined an inter-module architecture, then smaller teams can be responsible for their own module, and the cost of coordination goes down. I think this type of project management structure is especially appealing to large companies, who otherwise find it difficult to manage larger projects. And the tax of greater overall complexity is often manageable, as these big companies tend to have more resources.

JSON

The choice of JSON was controversial from the start. It did end up being a source of friction, but for surprising reasons.
The original vision was to write plug-ins in any language, especially for things like language servers that would be best developed in the language of that ecosystem. This is the main reason I chose JSON, because I expected there would be high quality implementations in every viable language. Many people complained about the fact that JSON escapes strings, and suggested alternatives such as MessagePack. But I knew that the speed of raw JSON parsing was a solved problem, with a number of extremely high performance implementations (simdjson is a good example).

Even so, aside from the general problems of modular software as described above, JSON was the source of two additional problems. For one, JSON in Swift is shockingly slow. There are discussions on improving it but it’s still a problem. This is surprising to me considering how important it is in many workloads, and the fact that it’s clearly possible to write a high performance JSON implementation. Second, on the Rust side, while serde is quite fast and very convenient (thanks to proc macros), when serializing a large number of complex structures, it bloats code size considerably. The xi core is 9.3 megabytes in a Linux release build (debug is an eye-watering 88MB), and a great deal of that bloat is serialization. There is work to reduce this, including miniserde and nanoserde, but serde is still by far the most mainstream. I believe it’s possible to do performant, clean JSON across most languages, but people should know, we’re not there yet.

The rope

There are only a few data structures suitable for representation of text in a text editor. I would enumerate them as: contiguous string, gapped buffer, array of lines, piece table, and rope. I would consider the first unsuitable for the goals of xi-editor as it doesn’t scale well to large documents, though its simplicity is appealing, and memcpy is fast these days; if you know your document is always under a megabyte or so, it’s probably the best choice.
Array of lines has performance failure modes, most notably very long lines. Similarly, many good editors have been written using piece tables, but I’m not a huge fan; performance is very good when first opening the file, but degrades over time. My favorite aspect of the rope as a data structure is its excellent worst-case performance. Basically, there aren’t any cases where it performs badly. And even the concern about excess copying because of its immutability might not be a real problem; Rust has a copy-on-write mechanism where you can mutate in-place when there’s only one reference to the data. The main argument against the rope is its complexity. I think this varies a lot by language; in C a gapped buffer might be preferable, but I think in Rust, a rope is the sweet spot. A large part of the reason is that in C, low level implementation details tend to leak through; you’ll often be dealing with a pointer to the buffer. For the common case of operations that don’t need to span the gap, you can hand out a pointer to a contiguous slice, and things just don’t get any simpler than that. Conversely, if any of the invariants of the rope are violated, the whole system will just fall apart. In Rust, though, things are different. Proper Rust style is for all access to the data structure to be mediated by a well-defined interface. Then the details about how that’s implemented are hidden from the user. A good way to think about this is that the implementation has complexity, but that complexity is contained. It doesn’t leak out. I think the rope in xi-editor meets that ideal. A lot of work went into getting it right, but now it works. Certain things, like navigating by line and counting UTF-16 code units, are easy and efficient. It’s built in layers, so could be used for other things including binary editing. One of the best things about the rope is that it can readily and safely be shared across threads. 
Ironically we didn’t end up making much use of that in xi-editor, as it was more common to share across processes, using sophisticated diff/delta and caching protocols.

A rope is a fairly niche data structure. You really only want it when you’re dealing with large sequences, and also doing a lot of small edits on them. Those conditions rarely arise outside text editors. But for people building text editing in Rust, I think xi-rope holds up well and is one of the valuable artifacts to come from the project. There’s a good HN discussion of text editor data structures where I talk about the rope more, and can also point people to the Rope science series for more color.

Async is a complexity multiplier

We knew going in that async was going to be a source of complexity. The hope was that we would be able to tackle the async stuff once, and that the complexity would be encapsulated, much as it was for the rope data structure. The reality was that adding async made everything more complicated, in some cases considerably so.

A particularly difficult example was dealing with word wrap. In particular, when the width of the viewport is tied to the window, then live-resizing the window causes text to rewrap continuously. With the process split between front-end and core, and an async protocol between them, all kinds of interesting things can go wrong, including races between editing actions and word wrap updates. More fundamentally, it is difficult to avoid tearing-style artifacts.

One early relative success was implementing scrolling. The problem is that, as you scroll, the front-end needs to sometimes query the core to fetch visible text that’s outside its cache. We ended up building this, but it took months to get it right. By contrast, if we just had the text available as an in-process data structure for the UI to query, it would have been quite straightforward.
I should note that async in interactive systems is more problematic than the tamer variety often seen in things like web servers. There, the semantics are generally the same as simple blocking threads, just with (hopefully) better performance. But in an interactive system, it’s generally possible to observe internal states. You have to display something, even when not all subqueries have completed.

As a conclusion, while the process split with plug-ins is supportable (similar to the Language Server Protocol), I now firmly believe that the process separation between front-end and core was not a good idea.

Syntax highlighting

Probably the high point of the project was the successful implementation of syntax highlighting, based on Tristan Hume’s syntect library, which was motivated by xi. There’s a lot more to say about this.

First, TextMate / Sublime style syntax highlighting is not really all that great. It is quite slow, largely because it grinds through a lot of regular expressions with captures, and it is also not very precise. On the plus side, there is a large and well-curated open source collection of syntax definitions, and it’s definitely “good enough” for most use. Indeed, code that fools these syntax definitions (such as two open braces on the same line) is a good anti-pattern to avoid.

It may be surprising just how much slower regex-based highlighting is than fast parsers. The library that xi uses, syntect, is probably the fastest open source implementation in existence (the one in Sublime is faster but not open source). Even so, it is approximately 2500 times slower for parsing Markdown than pulldown-cmark. And syntect doesn’t even parse setext-style headings correctly, because Sublime style syntax definitions have to work line-at-a-time, and the line of dashes following a heading isn’t available until the next line.

These facts influenced the design of xi in two important ways.
First, I took it as a technical challenge to provide a high-performance editing experience even on large files, overcoming the performance problems through async. Second, the limitations of the regex-based approach argued in favor of a modular plug-in architecture, so that as better highlighters were developed, they could be plugged in. I had some ambitions of creating a standard protocol that could be used by other editors, but this absolutely failed to materialize. For example, Atom instead developed tree-sitter.

In any case, I dug in and did it. The resulting implementation is impressive in many ways. The syntax highlighter lives in a different process, with asynchronous updates so typing is never slowed down. It’s also incremental, so even if changes ripple through a large file, it updates what’s on the screen quickly. Some of the sophistication is described in Rope science 11.

There was considerable complexity in the implementation. Text was synchronized between the main xi-core process and the plug-in, but for large files, the latter stores only a fixed-size cache; the cache protocol ended up being quite sophisticated. Updates were processed through a form of Operational Transformation, so if a highlighting result raced a text edit, it would never color an incorrect region (this is still very much a problem for language server annotations).

As I said, syntax highlighting was something of a high point. The success suggested that a similar high-powered engineering approach could systematically work through the other problems. But this was not to be.

As part of this work, I explored an alternative syntax highlighting engine based on parser combinators. If I had pursued that, the result would have been lightning fast and of comparable quality to the regex approach, but it would have been difficult to create syntax descriptions, as it involved a fair amount of manual factoring of parsing state.
While the performance would have been nice to have, ultimately I don’t think there’s much niche for such a thing. If I were trying to create the best possible syntax highlighting experience today, I’d adapt Marijn Haverbeke’s Lezer.

To a large extent, syntax highlighting is a much easier problem than many of the others we faced, largely because the annotations are a history-free function of the document’s plain text. The problem of determining indentation may seem similar, but is dependent on history. And it basically doesn’t fit nicely in the CRDT model at all, as that requires the ability to resolve arbitrarily divergent edits between the different processes (imagine that one goes offline for a bit, types a bit, then the language server comes back online and applies indentation).

Another problem is that our plug-in interface had become overly specialized to solve the problems of syntax highlighting, and did not well support the other things we wanted to do. I think those problems could have been solved, but only with significant difficulty.

There is no such thing as native GUI

As mentioned above, a major motivation for the front-end / core process split was to support development of GUI apps using a polyglot approach, as Rust wasn’t a suitable language for building GUI. The theory was that you’d build the GUI using whatever libraries and language were most suitable for the platform, basically the platform’s native GUI, then interact with the Rust engine using interprocess communication.

The strongest argument for this is probably macOS, which at the time had Cocoa as basically the blessed way to build GUI. Most other platforms have some patchwork of tools. Windows is particularly bad in this respect, as there’s old-school (GDI+ based) win32, WinForms, WPF, Xamarin, and most recently WinUI, which nobody wants to use because it’s Windows 10 only.
Since xi began, macOS is now catching up in the number of official frameworks, with Catalyst and SwiftUI added to the roster. Outside the realm of official Apple projects, lots of stuff is shipping in Electron these days, and there are other choices including Qt, Flutter, Sciter, etc.

When doing some performance work on xi, I found to my great disappointment that performance of these so-called “native” UI toolkits was often pretty poor, even for what you’d think of as the relatively simple task of displaying a screenful of text. A large part of the problem is that these toolkits were generally made at a time when software rendering was a reasonable approach to getting pixels on screen. These days, I consider GPU acceleration to be essentially required for good GUI performance. There’s a whole other blog post in the queue about how some toolkits try to work around these performance limitations by leveraging the compositor more, but that has its own set of drawbacks, often including somewhat ridiculous RAM usage for all the intermediate textures.

I implemented an OpenGL-based text renderer for xi-mac, and did similar explorations on Windows, but this approach gives up a lot of the benefits of using the native features (as a consequence, emoji didn’t render correctly). Basically, I discovered that there is a pretty big opportunity to build UI that doesn’t suck.

Perhaps the most interesting exploration was on Windows, the xi-win project. Originally I was expecting to build the front-end in C# using one of the more mainstream stacks, but I also wanted to explore the possibility of using lower-level platform capabilities and programming the UI in Rust. Early indications were positive, and this project gradually morphed into Druid, a native Rust GUI toolkit which I consider very promising. If I had said that I would be building a GUI toolkit from scratch as part of this work when I set out, people would have rightly ridiculed the scope as far too ambitious.
But that is how things are turning out.

Fuchsia

An important part of the history of the project is its home in Fuchsia for a couple years. I was fortunate that the team was willing to invest in the xi vision, including funding Colin’s work and letting me host Tristan to build multi-device collaborative editing as an intern project. In many ways the goals and visions aligned, and the demo of that was impressive. Ultimately, though, Fuchsia was not at the time (and still isn’t) ready to support the kind of experience that xi was shooting for. Part of the motivation was also to develop a better IME protocol, and that made some progress (continued by Robert Lord, and you can read about some of what we discovered in Text Editing Hates You Too). It’s sad this didn’t work out better, but such is life.

A low point

My emotional tone over the length of the project went up and down, with the initial enthusiasm, stretches of slow going, a renewed excitement over getting the syntax highlighting done, and some other low points. One of those was learning about the xray project. I probably shouldn’t have taken this personally, as it is very common in open source for people to spin up new projects for a variety of reasons, not least of which is that it’s fun to do things yourself, and often you learn a lot.

Even so, xray was a bit of a wake-up call for me. It was evidence that the vision I had set out for xi was not quite compelling enough that people would want to join forces. Obviously, the design of xray had a huge amount of overlap with xi (including the choice of Rust and the decision to use a CRDT), but there were other significant differences, particularly the choice to use Web technology for the UI so it would be cross-platform (the fragmented state of xi front-ends, especially the lack of a viable Windows port, was definitely a problem). I’m putting this here because often, how you feel about a project is just as important, even more so, than technical aspects.
I now try to listen more deeply to those emotional signals, especially valid criticisms.

Community

Part of the goal of the project was to develop a good open-source community. We did pretty well, but looking back, there are some things we could have done better. A lot of the friction was simply the architectural burden described above. But in general I think the main thing we could have done better is giving contributors more agency. If you have an idea for a feature or other improvement, you should be able to come to the project and do it. The main role of the maintainers should be to help you do that. In xi, far too often things were blocking on some major architectural re-work (we have to redo the plug-in API before you can implement that feature).

One of the big risks in a modular architecture is that it is often expedient to implement things in one module when doing things “right” might require them in a different place, or, even worse, require changes in inter-module interfaces. We faced these decisions a lot, and often as maintainers we were in a gate-keeping role. One of the worst examples of this was vi keybindings, for which there was a great deal of community interest, and even a project done off to the side to try to achieve it, but never merged into the main project. So I think monolithic architectures, perhaps ironically, are better for community. Everybody takes some responsibility for the quality of the whole.

In 2017 we hosted three Google Summer of Code students: Anna Scholtz, Dzung Lê, and Pranjal Paliwal. This worked out well, and I think GSoC is a great resource.

I have been fortunate for almost the entire time to have Colin Rofls taking on most of the front-line community interaction. To the extent that xi has been a good community, much of the credit is due him. One of the things we have done very right is setting up a Zulip instance. It’s open to all with a Github account, but we have had virtually no difficulty with moderation issues.
We try to maintain positive interactions around all things, and lead by example. This continues as we pivot to other things, and may be one of the more valuable spin-offs of the project.

Conclusion

The xi-editor project had very ambitious goals, and bet on a number of speculative research subprojects. Some of those paid off, others didn’t. One thing I would do differently is more clearly identify which parts are research and which parts are reasonably straightforward implementations of known patterns. I try to do that more explicitly today.

To a large extent the project was optimized for learning rather than shipping, and through that lens it has been pretty successful. I now know a lot more than I did about building editor-like GUI applications in Rust, and am now applying that to making the Druid toolkit and the Runebender font editor. Perhaps more important, because these projects are more ambitious than one person could really take on, the community started around xi-editor is evolving into one that can sustain GUI in Rust. I’m excited to see what we can do.

Discuss on Hacker News and /r/rust.


Neuralink Announces FDA Approval of In-Human Clinical Study

Neuralink, a neurotech startup co-founded by Elon Musk, has received FDA approval for its first in-human clinical study to test its brain implant called the Link. The implant aims to help patients with severe paralysis regain the ability to control external technologies using neural signals, potentially allowing them to communicate through mind-controlled cursors and typing.

CNBC reports: "This is the result of incredible work by the Neuralink team in close collaboration with the FDA and represents an important first step that will one day allow our technology to help many people," the company wrote in a tweet. The FDA and Neuralink did not immediately respond to CNBC's request for comment. The extent of the approved trial is not known. Neuralink said in a tweet that patient recruitment for its clinical trial is not open yet. No [brain-computer interface, or BCI] company has managed to clinch the FDA's final seal of approval. But by receiving the go-ahead for a study with human patients, Neuralink is one step closer to market.

Neuralink's BCI will require patients to undergo invasive brain surgery. Its system centers around the Link, a small circular implant that processes and translates neural signals. The Link is connected to a series of thin, flexible threads inserted directly into the brain tissue, where they detect neural signals. Patients with Neuralink devices will learn to control it using the Neuralink app. Patients will then be able to control external mice and keyboards through a Bluetooth connection, according to the company's website.

Read more of this story at Slashdot.


