
Chess, a Drosophila of reasoning

[Figure credit: Igor Khodzinskiy]

The recent world chess championship saw Magnus Carlsen defend his title against Fabiano Caruana. But it was not a contest between the two strongest chess players on the planet, only the strongest humans. Soon after I lost my rematch against IBM's Deep Blue in 1997, the short window of human-machine chess competition slammed shut forever. Unlike humans, machines keep getting faster, and today a smartphone chess app can be stronger than Deep Blue. But as we see with the AlphaZero system (see pages [1118][2] and [1140][3]), machine dominance has not ended the historical role of chess as a laboratory of cognition.

[Figure credit: DeepMind Technologies Limited]

Much as the Drosophila melanogaster fruit fly became a model organism for geneticists, chess became a Drosophila of reasoning. In the late 19th century, Alfred Binet hoped that understanding why certain people excelled at chess would unlock secrets of human thought. Sixty years later, Alan Turing wondered if a chess-playing machine might illuminate, in the words of Norbert Wiener, “whether this sort of ability represents an essential difference between the potentialities of the machine and the mind.”

Much as airplanes don't flap their wings like birds, machines don't generate chess moves like humans do. Early programs that attempted it were weak. Success came with the “minimax” algorithm and Moore's law, not with the ineffable human combination of pattern recognition and visualization. This prosaic formula dismayed the artificial intelligence (AI) crowd, who realized that profound computational insights were not required to produce a machine capable of defeating the world champion.

But now the chess fruit fly is back under the microscope. Based on a generic game-playing algorithm, AlphaZero incorporates deep learning and other AI techniques, like Monte Carlo tree search, to play against itself to generate its own chess knowledge.
Unlike top traditional programs like Stockfish and Fritz, which employ many preset evaluation functions as well as massive libraries of opening and endgame moves, AlphaZero starts out knowing only the rules of chess, with no embedded human strategies. In just a few hours, it plays more games against itself than have been recorded in human chess history. It teaches itself the best way to play, reevaluating such fundamental concepts as the relative values of the pieces. It quickly becomes strong enough to defeat the best chess-playing entities in the world, winning 28, drawing 72, and losing none in a victory over Stockfish.

I admit that I was pleased to see that AlphaZero had a dynamic, open style like my own. The conventional wisdom was that machines would approach perfection with endless dry maneuvering, usually leading to drawn games. But in my observation, AlphaZero prioritizes piece activity over material, preferring positions that to my eye looked risky and aggressive. Programs usually reflect the priorities and prejudices of their programmers, but because AlphaZero programs itself, I would say that its style reflects the truth. This superior understanding allowed it to outclass the world's top traditional program despite calculating far fewer positions per second. It's the embodiment of the cliché, “work smarter, not harder.”

AlphaZero shows us that machines can be the experts, not merely expert tools. Explainability is still an issue—it's not going to put chess coaches out of business just yet. But the knowledge it generates is information we can all learn from. AlphaZero is surpassing us in a profound and useful way, a model that may be duplicated on any other task or field where virtual knowledge can be generated.

Machine learning systems aren't perfect, even at a closed system like chess. There will be cases where an AI will fail to detect exceptions to its rules. Therefore, we must work together, to combine our strengths.
I know better than most people what it's like to compete against a machine. Instead of raging against them, it's better if we're all on the same side.

[2] http://www.sciencemag.org/content/362/6419/1118
[3] http://www.sciencemag.org/content/362/6419/1140


Embracing Λόγος: Programming as Imitation of the Divine

Within the field of software development, we are prone to gazing upon the future – new libraries, new tools. But from where did we come? The philosophical foundation of the field is largely absent from the contemporary zeitgeist, but our work is deeply rooted in the philosophical traditions of not only Logic, but Ontology, Identity, Ethics and so on. Daily, the programmer struggles with not only their implementation of logic but the ontological and identity questions of classifying and organizing their reality into a logical program. What is a User? What are its properties? What actions can be taken on it? “Oh the mundanity!” – cries the programmer. But indeed, as we will explore here – you are doing God’s work!

Because the work of programmers is not too dissimilar from that of philosophers throughout history, we can look to them for guidance on the larger questions of our own tradition. In this piece, we will focus mainly on the ancient Greeks and their metaphysical works. Guided by their knowledge, we can better incorporate Reason and Logic into our programs and strive to escape Plato’s Cave (https://en.wikipedia.org/wiki/Allegory_of_the_cave). Furthermore, because the results of our work are our reason manifested into reality, we must suffer under the greater burden of responsibility to aim towards the divine Reason.

Λόγος

[T]he spermatikos logos in each man provides a common, non-confessional basis in each man, whether as a natural or supernatural gift from God (or both), by which he is called to participate in God’s Reason or [λόγος], from which he obtains a dignity over the brute creation, and out of which he discovers and obtains normative judgments of right and wrong (https://lexchristianorum.blogspot.com/2010/03/st-justin-martyr-spermatikos-logos-and.html)

The English word logic is rooted in the Ancient Greek λόγος (Logos) – meaning “word, discourse or reason”. λόγος is related to the Ancient Greek λέγω
(légō) – meaning “I say”, a cognate with the Latin legus or “law”. Going even further back, λόγος derives from the PIE root *leǵ- which can have the meanings “I put in order, arrange, gather, I choose, count, reckon, I say, speak”. (https://en.wikipedia.org/wiki/Logos)

The concept of the λόγος has been studied and applied philosophically throughout history – going back to Heraclitus around 500 BC. Heraclitus described the λόγος as the common Reason of the world and urged people to strive to know and follow it. “For this reason it is necessary to follow what is common. But although the λόγος is common, most people live as if they had their own private understanding.” (Diels–Kranz, 22B2)

With Aristotelian, Platonic and early Stoic thought, the λόγος as universal and objective Reason and Logic was further considered and defined. λόγος was seen by the Stoics as an active, material phenomenon driving nature and animating the universe. The λόγος σπερματικός (“logos spermatikos”) was, according to the Stoics, the principle, generative Reason acting in inanimate matter in the universe. Plutarch, a Platonist, wrote that the λόγος was the “go-between” between God and humanity. The Stoics believed that humans each possess a part of the divine λόγος. The λόγος was also a fundamental philosophical foundation for early Christian thought (see John 1:1-3).

The λόγος is impossible to concisely summarize. But for the purpose of this piece, we can consider it the metaphysical (real but immaterial) universal Reason; an infinite source of Logic and Truth into which humans tap when they reason about the world.

Imitation of the Divine

In so far as the spirit is also a kind of ‘window on eternity’… it conveys to the soul a certain influx divinus… and the knowledge of a higher system of the world (Jung, Carl. Mysterium Coniunctionis)

What is “imitation of the divine”? One could certainly begin by considering what the alternative would be.
A historical current has existed in the philosophical tradition of humanity’s opportunity and responsibility to turn to and harness the divine λόγος in their daily waking life. With language and thought we reason about the material and immaterial. As Rayside and Campbell declared in their defense of traditional logic in the field of Computer Science – “But if what is real and unchanging (the intelligible structure in things) is the measure of what we think about it (concept) and speak (word) about it, then it too is a work of reason not our reason, for our reason is the measured, but of Reason.” (Rayside, D, and G Campbell. Aristotle and Object-Oriented Programming: Why Modern Students Need Traditional Logic. https://dl.acm.org/doi/pdf/10.1145/331795.331862)

Plato, in his theory of the tripartite soul, understood that the ideal human would not suffer passions (θυμοειδές, literally “anger-kind”) or desires (ἐπιθυμητικόν) but be led by the λόγος innate in the soul (λογιστικόν). When human reasoning is concordant with Reason, for a moment, Man transcends material reality and is assimilated with the divine (the λόγος). “Hence, so many of the great thinkers who have gone before us posited that the natural way in which the human mind gets to God is in a mediated way — via things themselves, which express God to the extent that they can.” (Rayside, Campbell) God here is the representative of the λόγος – humanity can achieve transcendental knowledge by consideration (in the deepest sense of the word) of the things around them.

The Programmer Assimilated

It is simply foolish to pretend that human reason is not concerned with meaning, or that programming is not an application of human reason (Rayside, Campbell)

The programmer must begin by defining things – material or conceptual.
“We are unable to reason or communicate effectively if we do not first make the effort to know what each thing is.” (Rayside, Campbell) By considering the ontological questions of the things in our world, in order to represent them accurately (and therefore ethically) in our programs, the programmer enters into the philosophical praxis. Next, the programmer adds layers of identity and logic on top of their ontological discovery, continuing in the praxis. But the programmer takes it a step further – the outcome of their investigation is not only their immaterial thought but, in executing the program, the manifestation of their philosophical endeavor into material reality. The program choreographs trillions of elementary charges through a crystalline maze, harnessing the virtually infinite charge of the Earth, incinerating the remains of starlight-fueled ancient beings in order to realize the reasoning of its programmer.

Here the affair enters into the realm of Ethics. “The programmer is attempting to solve a practical problem by instructing a computer to act in a particular fashion. This requires moving from the indicative to the imperative: from can or may to should. For a philosopher in the tradition, this move from the indicative to the imperative is the domain of moral science.” (Rayside, Campbell) Any actions taken by the program are the direct ethical responsibility of the programmer. Furthermore, the programmer, as the source of reason and will driving a program, manifesting it into existence, becomes in that instant the λόγος σπερματικός (“logos spermatikos”) incarnate. The programmer’s reason, tapped into the divine Reason (λόγος), is generated into existence in the Universe and commands reasonable actions of inanimate matter.

Feeble Earthworm

What sort of freak then is man? How novel, how monstrous, how chaotic, how paradoxical, how prodigious!
Judge of all things, feeble earthworm, repository of truth, sink of doubt and error, glory and refuse of the universe! (Pascal, B. (1670). Pensées.)

Pascal would be even more perplexed by the paradox of the programmer – in search of Logic and simultaneously materializing their logic; their “repository of truth” a hand emerging from the dirt reaching towards the λόγος. Programmers are equals among the feeble earthworms crawling out of Plato’s cave. We enjoy no extraordinary access to Reason and yet bear a greater responsibility as commanders of this technical revolution in which we find ourselves. While the Greeks had an understanding of the weight of their work, their impact was restricted to words. The programmer’s work is a true hypostatization or materialization of the programmer’s reason.

As programmers – as beings of Reason at the terminal of this grand system – we should most assuredly concern ourselves with embracing and modeling ourselves and our work after the divine and eternal λόγος.

The post Embracing Λόγος: Programming as Imitation of the Divine appeared first on Simple Thread.


Dev Discussions - Phillip Carter

This is the full interview from my discussion with Phillip Carter in my weekly (free!) newsletter, The .NET Stacks. Consider subscribing today!

Last week, we talked to Isaac Abraham about F# from a C# developer’s perspective. This week, I’m excited to get more F# perspectives from Phillip Carter. Phillip is a busy guy at Microsoft, but a big part of his role as a Program Manager is overseeing F# and its tooling.

In this interview, we talk to Phillip about Microsoft’s F# support, F# tooling (and how it might compare to C#), good advice for learning F#, and more.

Can you talk about what you’ve done at Microsoft, and how you landed on F#?

I spent some time bouncing around on various projects related to shipping .NET Core 1.0 for my first year-ish at Microsoft. A lot of it was me doing very little in the early days, since there was little for an entry-level PM to do. But I did find that the Docs team needed help, and so I ended up writing a good portion of the .NET docs that exist on docs.microsoft.com today. Some of the information architecture I contributed to is still present there today.

I got the F# gig because I had an interest in F# and the current PM was leaving for a different team. Rather than let it sit without a PM for an indeterminate amount of time, everyone agreed that I should take the position. Always better to have someone who’s interested in the space assume some responsibility than have nobody do it, right? I’ve been working on F# at Microsoft ever since.

Do you feel F# gets the recognition and attention at Microsoft it deserves?

This is always a fun question. Many F# programmers would emphatically proclaim, “No!” and it’s of course a meme that Microsoft Doesn’t Care About F# or whatever.
But the reality is that, like all other functional programming languages, F# is a niche in terms of adoption, and it is likely to stay a niche if you compare it to the likes of C#, Java, C++, Python, and JavaScript.

As a fan of programming languages in general, I feel like tech companies (like Microsoft) emphasize platforms far more than any given language. I find it kind of funny, because I’ve already seen multiple platforms come and go in our industry—but all the languages I’ve been involved with have only become more permanent and grown during that same timespan. This is kind of sidestepping the question, but it really is how I feel about the topic.

Being a niche language doesn’t mean something isn’t valuable. To the contrary, there are many large, paying customers who use F# today, and that is only expected to grow as more organizations incorporate and grow software systems and hire people who like functional programming. For example, F# powered Azure’s first billion-dollar startup (Jet.com) back in 2016. Could they have used C#? Sure. But they didn’t. Did F# cause them to use Azure? Maybe. They evaluated Azure and Google Cloud and decided on Azure for a variety of reasons, technological compatibility perhaps being one of them. But these questions don’t really matter.

From Microsoft’s perspective, we want customers to use the tech they prefer, not the tech we prefer, and be successful with it. If that’s F#, then we want to make sure that they can use our developer tools and platforms and have a good time doing it. If they can’t, then we want to do work to fix it. If you look at official statements on things like our languages, we’re fairly unopinionated and encourage people to try the language and environment that interests them the most and works the best for their scenario.

All this context matters when answering this question. Yes, I think Microsoft gives F# roughly the attention and love it deserves, certainly from an engineering standpoint.
I don’t think any other major company would do something like pay an employee to fly out to a customer’s office, collect problems they are having with tooling for a niche programming language, and then have a team refocus their priorities to fix a good set of those issues all in time for a major release (in case it wasn’t clear, I am speaking from experience). From the customer standpoint, this is the kind of support that they would expect.

From a “marketing” standpoint, I think more attention should be paid to programming languages in general, and that F# should be emphasized more proportionally. But the reality is that there are a lot more platforms than there are languages, so I don’t see much of a change in our industry. I do think that things have improved a lot for F# on this front in the past few years, and I’ve found that for non-engineering tasks I’ve had to work less to get F# included in something over time.

This year alone, we’ve had four blog posts about F# updates for F# 5, with a few more coming along. And of course F# also has dedicated pages on our own .NET website, dedicated tutorials for newcomers, and a vast library of documentation that’s a part of the .NET docs site. But if people are waiting for Microsoft’s CEO to get on stage, proclaim that OOP is dead and we all need to do FP with F#, they shouldn’t be holding their breath.

This also speaks to a broader point about F# and some Microsoft tech. One of the reasons why we pushed everything into the open and worked to ensure that cross-platform worked well was because we wanted to shift the perception of our languages and tech stacks. People shouldn’t feel like they need Microsoft to officially tell them to use something.
They should feel empowered to investigate something like F# in the context of their own work, determine its feasibility for themselves, and present it to their peers. I think it’s Microsoft’s responsibility to ensure that potential adopters are armed with the correct information and have a strong understanding that F# and .NET are supported products. And it’s also Microsoft’s responsibility to communicate updates and ensure that F# gets to “ride the wave” of various marketing events for .NET. But I really, truly want people to feel like they don’t need Microsoft for them to be successful with using and evangelizing F#. I think it’s critical that the power dynamic concerning F# and .NET usage in any context balances out more between Microsoft and our user base. This isn’t something that can come for free, and it does require active participation of people like me in communities rather than taking some lame ivory tower approach.

From the build system to the tooling, is F# a first-class citizen in Visual Studio and other tools like C# is? If I’m a C# dev coming over, would I be surprised about things I am used to?

This is a good question, and the answer is: it depends on what you do in Visual Studio. All developers are different, but I have noticed a stark contrast within the C# crowd: those who use visual designer tooling and those who do not.

For those who use visual designer tooling heavily, F# may not be to your liking. C# and VB are the only two Visual Studio languages that have extensive visual designer tooling support, and if you rely on or prefer these tools, then you’ll find F# to be lacking. F# has an abundance of IDE tooling for editing code and managing your codebase, but it does not plug into things like the EF6 designer, Code Map, WinForms designer, and so on.

For anyone who uses Visual Studio primarily to edit code, F# may take some getting used to, but most of the things you’re used to using are present. In that sense, it is first class.
Project integration, semantic colors, IntelliSense, tooltips (more advanced than those in C#), some refactorings and analyzers, and so on.

C# has objectively more IDE features than F#, and the number of refactorings available in F# tooling absolutely pales in comparison to C# tooling. Some of these come down to how each language works, though, so it’s not quite so simple as “F# could just have XYZ features that C# has.” But overall, I think people tend to feel mostly satisfied by the kinds of tooling available to them.

It’s often claimed that F# needs fewer refactoring tools because the language tends to guide programmers into one way to do things, and the combination of the F# type system and language design lends itself towards the idiom, “if it compiles, it works right.” This is mostly true, though I do feel like there are entire classes of refactoring tools that F# developers would love to use, and they’d be unique to F# and functional programming.

What’s the biggest hurdle you see for people trying to learn F#, especially from OO languages like C# or Java?

I think that OO programming in mainstream OO languages tends to over-emphasize ceremony and lead to overcomplicated programs. A lot of people normalize this and then struggle to work with something that is significantly simpler and has fewer moving parts. When you expect something to be complicated and it’s simple, this simplicity almost feels like it’s mocking you, like the language and environment is somehow smarter than you. That’s certainly what I felt when I learned F# and Haskell for the first time.

Beyond this, I think that the biggest remaining hurdles are simply the lack of curly braces and immutability.
It’s important to recall that for many people, programming languages are strongly associated with curly braces, and they can struggle to accept that a general-purpose programming language can work well without them. C, C++, Java, C#, and JavaScript are the most widely used languages, and they all come from the same family of programming languages. Diverging greatly from that syntax is a big deal and shouldn’t be underestimated. This syntax hurdle is less of a concern for Python programmers, and I’ve found that folks with Python experience are usually warmer to the idea of F# being whitespace-sensitive. Go figure!

The immutability hurdle is a big one for everyone, though. Most people are trained to do “place-oriented programming”—put a value in a variable, put that variable in a list of variables, change the value in the variable, change the list of variables, and so on. Shifting the way you think about program flow in terms of immutability is a challenge that some people never overcome, or they prefer not to overcome because they hate the idea of it. It really does fundamentally alter how you write a program, and if you have a decade or more of training with one way to write programs, immutability can be a big challenge.

As a C# developer, in your opinion what’s the best way to learn F#?

I think the best way is to start with a console app and work through a problem that requires the use of F# types—namely records and unions—and processing data that is modeled by them. A parser for a Domain-Specific Language (DSL) is a good choice, but a text-based game could also work well. From there, graduating to a web API or web app is a good idea. The SAFE stack combines F# on the backend with F# on the frontend via Fable—a wonderful F# to JavaScript compiler—to let you build an app in F# in multiple contexts. WebSharper also allows you to accomplish this in a more opinionated way (it’s a supported product, too).
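As a rough sketch of the kind of exercise described here (the type and function names are illustrative, not from the interview), a tiny arithmetic DSL modeled with a union might look like this:

```fsharp
// A union models the alternatives of a tiny expression DSL;
// pattern matching makes the evaluator a direct transcription of the cases.
type Expr =
    | Num of int
    | Add of Expr * Expr
    | Mul of Expr * Expr

let rec eval expr =
    match expr with
    | Num n -> n
    | Add (a, b) -> eval a + eval b
    | Mul (a, b) -> eval a * eval b

// (1 + 2) * 3
printfn "%d" (eval (Mul (Add (Num 1, Num 2), Num 3)))   // 9
```

The compiler warns if a match misses a case, which is part of why this style of modeling-then-processing data is such a good first F# exercise.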
Finally, Bolero is a newer tech that lets you build WebAssembly apps using some of the more infrastructural Blazor components. Some rough edges, but since WebAssembly has the hype train going for it, it’s not a bad idea to check it out.

Although this wasn’t your question, I think a wonderful way to learn F# is to do some basic data analysis work with F# in Jupyter Notebooks or just an F# script with F# Interactive. This is especially true for Python folks who work in more analytical spaces, but I think it can apply to any C# programmer looking to develop a better understanding of how to do data science—the caveat being that most C# programmers don’t use Jupyter, so there would really be two new things to learn.

What are you most excited about with F# 5?

Firstly, I’m most excited about shipping F# 5. It’s got a lot of features that people have been wanting for a long time, and we’ve been chipping away at it for nearly a year now. Getting this release out the door is something I’m eagerly awaiting.

If I had to pick one feature I like the most, it’s the ability to reference packages in F# scripts. I do a lot of F# scripting, and I use the mechanism in Jupyter Notebooks all the time, too. It just kind of feels like magic that I can simply state a package I want to use, and just use it. No caveats, no strings attached. In Python, you need to acquire packages in an unintuitive way due to weirdness with how notebooks and your shell process and your machine’s Python installation work. It’s complexity that simply doesn’t exist in the F# world, and I absolutely love it.

What’s on the roadmap past F# 5? Any cool features in the next few releases?

Currently we don’t have much of a roadmap set up for what comes next.
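The script package references praised above look roughly like this in an `.fsx` file (the package chosen here is just an example):

```fsharp
// script.fsx — run with: dotnet fsi script.fsx
// F# 5 resolves the NuGet package when the script starts; no project file needed.
#r "nuget: Newtonsoft.Json"

open Newtonsoft.Json

// Serialize an anonymous record to JSON using the referenced package.
let json = JsonConvert.SerializeObject {| Language = "F#"; Version = 5 |}
printfn "%s" json
```

The same `#r "nuget: ..."` directive works in .NET Interactive / Jupyter notebook cells, which is the workflow mentioned in the answer.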
There are still a few in-progress features, two of them being rather large: a task { } computation expression in FSharp.Core that rewrites into an optimized state machine, and a revamp of the F# constraint system to allow specifying static constraints in type extensions.

The first one is a big deal for anything doing high-performance work on .NET. The second one is a big deal for lots of general F# programming scenarios, especially if you’re in the numerical space and you want to define a specialized arithmetic for types that you’re importing from somewhere else. The second one will also likely fix several annoying bugs related to the F# constraint system and generally make library authors who use that system heavily much happier.

Beyond this, we’ll have to see what comes up during planning. We’re currently also revamping the F# testing system in our repository to make it easier for contributors to add tests and generally modernize the system that tens of thousands of tests use to execute today. We’re also likely to start some investigative work to rebase Visual Studio F# tooling on LSP and work with the F# community to use a single component in both VS and VSCode. They already have a great LSP implementation, but a big merging of codebases and features needs to happen there. It’ll be really exciting, but we’re not quite sure what the work “looks like” yet.

What’s something about working on the F# team that you wish the community knew, but probably doesn’t?

I think a lot of folks underestimate just how much work goes into adding a new language feature. Let’s consider something like nameof, which was requested a long time ago. Firstly, there needed to be a design for the obvious behaviors.
But then there are non-obvious ones, like nameof when used as a pattern, or what the result of taking the name of an operator should be (there are two kinds of names for an operator in F#). Language features need a high degree of orthogonality—they should work well with every other feature. And if they don’t, there needs to be a very good reason.

Firstly, that means a very large test matrix that takes a long time to write, but you also run into quirks that you wouldn’t initially anticipate. For example, F# has functions like typeof and typedefof that accept a type as a parameterized type argument, not an input to a function. Should nameof behave like that when taking the name of a type parameter? That means there are now two syntax forms, not just one. Is that the right call? We thought so, but it took a few months to arrive at that decision.

Another quirk in how it differs from C# is that you can’t take a fully-qualified name to an instance member as if it were static. Why not? Because nameof would be the only feature in all of F# that allows that kind of qualification. Special cases like this might seem fine in isolation, but if you have every language feature deciding to do things its own way rather than consider how similar behaviors work in the language, then you end up with a giant bag of parlor tricks with no ability to anticipate how you can or cannot use something.

Then there are tooling considerations: does it color correctly in all ways you’d use it? If I have a type and a symbol with the same name and I use nameof on it, what does the editor highlight? What name is taken? What is renamed when I invoke the rename refactoring? Does the feature correctly deactivate if I explicitly set my LangVersion to be lower than F# 5? And so on.

These things can be frustrating for people because they may try a preview, see that a feature works great for them now, and wonder why we haven’t just shipped it yet.
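For readers who haven’t used the feature, the two syntax forms discussed above look roughly like this (a small sketch; the function names are illustrative):

```fsharp
// F# 5's nameof yields a symbol's source name as a compile-time string,
// so error messages survive renames without string literals going stale.
let validate (input: string) =
    if isNull input then
        nullArg (nameof input)   // "input"
    else
        input.Length

// The second, parameterized form: taking the name of a type parameter,
// mirroring how typeof<'T> and typedefof<'T> accept a type argument.
let typeParamName<'T> () = nameof<'T>
```

These are exactly the two forms the answer describes: the ordinary `nameof expr` and the `nameof<'T>` form for type parameters.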
Additionally, if it’s a feature that was requested a long time ago, there seems to be some assumption that because a feature was requested a long time ago, it should be “further along” in terms of design and implementation. I’m not sure where these kinds of things come from, but the reason why things take long is because they are extremely difficult and require a lot of focused time to hammer out all the edge cases and orthogonality considerations.

Can you talk a little about the SAFE Stack and how it can be used in ASP.NET Core applications?

The SAFE stack is a great combination of F# technologies—minus the A for Azure or AWS, I guess—to do full-stack F# programming. It wasn’t the first to achieve this—I believe WebSharper was offering similar benefits many years ago—but by being a composition of various open source tools, it’s unique.

The S and A letters mostly come into play with ASP.NET Core. The S stands for Saturn, which is an opinionated web framework that uses ASP.NET Core under the hood. It calls into a more low-level library called Giraffe, and if you want to use Giraffe instead (GAFE), you can. The A is more of a stand-in for any cloud that can run ASP.NET Core, or I guess it could just mean A Web Server or something. It’s where ASP.NET Core runs under the hood here. The F and E are for Fable and Elmish, which are technologies for building web UIs with F#.

I won’t get into the details of how to use SAFE, but I will say that what I love about it is that all the technologies involved are entirely independent of one another and square F# technologies. Yes, they use .NET components to varying degrees and rely on broader ecosystems to supply some core functionality. But they are technologies made by and for F# developers first.

This is a level of independence for the language that I think is crucial for the long-term success of F#.
People can feel empowered to build great tech intended mainly for F# programmers, combine that tech with other tech, and have a nice big party in the cloud somewhere. What SAFE represents to me is more important than any of the individual pieces of tech it uses.

We're seeing a lot of F# inspiration lately in C#, especially with what's new in C# 9 (immutability and records, for example). Where do you think the dividing line is between C# with FP and using F#? Is there guidance to help me make that decision?

I think the dividing line comes down to two things: what your defaults are and what you emphasize.

C# is getting a degree of immutability with records. But normal C# programming in any context is mutable by default. You can do immutable programming in C# today, and C# records will help with that, but it's still a bit of a chore because the rest of the language is just begging you to mutate some variables. They're called variables for a reason! This isn't a value judgment, though; it's just what the defaults are. C# is mutable by default, with an increasing set of tools for doing some immutability. F# is immutable by default, and it has some tools for doing mutable programming.

I think the second point is more nuanced, but also more important. Both C# and F# implement the .NET object system. Both can do inheritance, use accessibility modifiers on classes, and do fancy things with interfaces (including interfaces with default implementations). But how many F# programmers use this side of the language as part of their normal programming? Not that many. OOP is possible in F#, but it's just not emphasized. F# is squarely about functional programming first, with an object programming system at your disposal when you need it.

On the other hand, C# is evolving into a more unopinionated language that lets you do just about anything in any way you like.
The reality is that some things work better than others (recall that C# is not immutable by default), but this lack of emphasis on one paradigm over another can lead to vastly different codebases despite being written in the same language. Is that okay? I can't tell. But I think it makes answering the question "how should I do this?" more challenging.

If you wanted to do functional programming, you could use C# and be fine. But by using a language that is fairly unopinionated about how you use it, you may find it harder to "get it" when thinking functionally than if you were to use F#. Some of the principles of typed functional programming may feel more difficult or awkward because C# wasn't built with them in mind at first. Not necessarily a blocker, but it still matters.

What I would say is that if you want to do functional programming, you will only help yourself by learning F# when learning FP. It's made for doing functional programming on .NET first and foremost, and as a general guideline it's a good idea to use tools that were made for a specific purpose if you are aligned with that purpose. You may find that you don't like it, or that you thought some things were cool but you're ultimately happier taking what you learned and writing C# code in a more functional style from now on. That's great, and you shouldn't feel any shame in deciding that F# isn't for you. But it'll certainly make writing functional C# easier, since you'll already have a good idea of how to generally approach things.

Something I ask everyone: what is your one piece of programming advice?

The biggest piece of advice I would give people is to look up the different programming paradigms, functional being one of them, and try out a language from some of them. Most programmers are used to an ALGOL-derived language, and although those are great languages, they all tend to encourage the same kind of thought process for how you write programs.
Programming can be a tool for thought, and languages from different backgrounds encourage different ways of thinking about solving problems with programming languages. I believe this can make people better programmers even if they never use F# or other languages outside the mainstream ones. Additionally, all languages borrow from one another to a degree, and understanding different languages can help you master new things coming into mainstream languages.

You can reach Phillip Carter on Twitter.


OCR and PDF Data Extraction in Microsoft Power Automate

Introduction to Microsoft Power Automate

Power Automate, formerly known as Microsoft Flow, is a cloud-based service offered by Microsoft to help users create and automate workflows across multiple applications and services. Its aim is to boost user productivity in business processes and automate repetitive manual tasks.

Using Power Automate, you can design workflows that connect to over 300 services, such as SharePoint, Outlook, Excel, OneDrive, Dynamics 365, and third-party applications like Twitter, Dropbox, and Google Services. With these workflows you can, for instance, automatically save email attachments to OneDrive or alert your team about specific tweets.

Power Automate allows conditional logic (if...then...else statements), which makes it more than just a task automation tool: it is also a potent business process automation platform that can handle complex scenarios, not just single-task automation. It offers pre-built templates for common tasks, reducing the technical barrier for non-programmers, while also providing robust tools for developers to create more complex automations.

Overall, Power Automate is a powerful tool for improving efficiency, reducing errors, and saving time by automating business tasks and processes.

Document Automation Workflows in Power Automate

Document automation workflows in Power Automate represent one of its most compelling features, enabling businesses to automate repetitive document management tasks, enhance efficiency, and reduce the possibility of human error.

At its core, Power Automate allows you to create and manage workflows involving documents: generating, editing, sharing, and storing them. This can involve Microsoft services like Word, Excel, and SharePoint, or third-party services like Google Docs, Adobe PDF, and Dropbox. This cross-platform functionality is a key strength, enabling diverse document workflows.

For instance, an approval workflow could be created, automating the process of document approval.
Here, Power Automate can detect when a new document is added to a SharePoint folder or a OneDrive directory. It can then automatically send an email to the appropriate person with a link to the document for review. Once the document is approved or rejected, the status can be updated and notifications sent to relevant stakeholders.

Another example could be a document generation and storage workflow. Data from Microsoft Forms or Dynamics 365 could be used to automatically generate documents in Word or Excel. The created document could then be converted into a PDF and stored in a specific SharePoint folder or sent via email.

Document automation also extends to areas like data extraction and integration. Power Automate can connect with Nanonets, enabling users to extract specific data from a document, such as invoice or receipt details, and automatically update a record in Dynamics 365 or an Excel spreadsheet.

Moreover, Power Automate's flexibility also allows developers to handle more complex scenarios. Custom connectors can be created to interact with services not directly supported by Power Automate, and error handling and conditional logic can be implemented to account for different workflow outcomes.

In conclusion, Power Automate's document automation workflows can transform time-consuming manual processes into efficient automated tasks, freeing employees to focus on more value-driven activities. Its versatility, simplicity, and deep integration with various services make it a vital tool for any organization seeking to streamline their document management processes.

OCR and PDF Data Extraction in Power Automate

Nanonets is a powerful tool that offers pretrained data extraction models that can extract useful data from documents. We support all common document types, and can easily train specialized models for custom document types.
Leveraging the Nanonets API in Power Automate opens up possibilities for developing highly efficient automated workflows, particularly in document data extraction.

To understand this, let's look at a common business scenario. Imagine a company receives a large volume of invoices daily. With Nanonets and Power Automate, you could automate the process of extracting the necessary data from these invoices and storing it in a database, or use it in another application like Dynamics 365. Here's a step-by-step example:

1. An invoice document is received and uploaded to a SharePoint folder, or received as an email attachment in Office 365 Mail.
2. A Power Automate workflow triggers upon the addition of this new document. Using the "When a file is created" or "When a new email arrives" trigger, Power Automate can automatically detect the new invoice.
3. The workflow then sends the document to the Nanonets API via an HTTP POST request. This could be done using a custom connector or the built-in HTTP action in Power Automate.
4. Nanonets processes the document with its machine learning model, specifically trained for invoice data extraction, and returns the extracted data in a structured format, like JSON.
5. The Power Automate workflow receives this data and can then parse and use it as required. This could involve updating an Excel spreadsheet, creating a new item in a SharePoint list, or updating a record in Dynamics 365.

In the Dynamics 365 context, Nanonets' ready-to-use integration with Dynamics 365 makes the process even more seamless. Let's explore another scenario to illustrate this:

- An invoice document is uploaded into Dynamics 365 as an attachment to a specific record.
- A Power Automate workflow is triggered based on this action.
- The workflow then sends the invoice document to the Nanonets API for processing, taking advantage of the ready-to-use integration that Nanonets offers with D365.
- Once the data is returned from Nanonets, the workflow parses the structured data and updates the relevant fields in the Dynamics 365 record. This could include details like the invoice number, date, total amount, and so on.

These workflows help to automate what can typically be a labor-intensive process, saving significant amounts of time and reducing the risk of human error. Moreover, they leverage the power of machine learning to accurately extract the required data, even from complex or varying invoice formats.

In addition to invoices, this process can be applied to a range of other document types: receipts, purchase orders, delivery notes, and more. Each document type requires a machine learning model trained for that specific document, which Nanonets is capable of providing.

Nanonets' integration with Power Automate and Dynamics 365 opens up significant possibilities for businesses looking to automate their document data extraction workflows.
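The parsing step described above — turning the structured JSON returned by the OCR service into fields you can write to Dynamics 365 or Excel — can be sketched in a few lines. Note that the JSON layout below is an illustrative stand-in based on a label/ocr_text prediction format; check the actual Nanonets response schema for your model before relying on it.

```python
# Sketch: flatten a hypothetical OCR prediction response into a single
# record suitable for an Excel row or a Dynamics 365 field update.
# The "result"/"prediction"/"label"/"ocr_text" layout is an assumption
# for illustration, not a guaranteed schema.

def flatten_predictions(response: dict) -> dict:
    """Collect {label: ocr_text} pairs from every prediction in the response."""
    record = {}
    for result in response.get("result", []):
        for pred in result.get("prediction", []):
            record[pred["label"]] = pred["ocr_text"]
    return record

if __name__ == "__main__":
    sample = {
        "result": [{
            "prediction": [
                {"label": "invoice_number", "ocr_text": "INV-1042"},
                {"label": "invoice_date", "ocr_text": "2023-06-01"},
                {"label": "total_amount", "ocr_text": "1250.00"},
            ]
        }]
    }
    print(flatten_predictions(sample))
    # {'invoice_number': 'INV-1042', 'invoice_date': '2023-06-01', 'total_amount': '1250.00'}
```

In Power Automate itself this step would typically be done with the built-in "Parse JSON" action, but the logic is the same: walk the predictions and map each label to its extracted text.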
These integrations make it easier for organizations to harness the power of machine learning in their everyday processes, leading to greater operational efficiency and accuracy.

Here are a variety of examples showcasing how Nanonets can be utilized in Power Automate for different automated document data extraction workflows:

- Expense Reports: Scan uploaded receipts in SharePoint, extract data with Nanonets, and automatically populate an Excel sheet for expense tracking.
- Contract Management: Upload contracts to a specific OneDrive folder, extract key details like parties involved, dates, and clauses using Nanonets, and update a SharePoint list for contract management.
- Invoice Processing: Send invoices received via email to Nanonets for data extraction, and use the returned data to create or update records in Dynamics 365 Finance.
- Order Fulfillment: Extract data from purchase orders uploaded to a Teams channel using Nanonets, and trigger a Power Automate workflow to create a new order in Dynamics 365 Supply Chain Management.
- HR Onboarding: When new employee documents are added to a SharePoint folder, extract key details like name, job title, and start date with Nanonets, and then create a new employee record in Dynamics 365 Human Resources.
- Customer Correspondence: Extract key information from customer letters or emails using Nanonets, and automatically create or update a customer service case in Dynamics 365 Customer Service.
- Project Management: When a new project proposal is added to a Teams channel, use Nanonets to extract key details like project title, proposed timeline, and budget, and create a new project record in Dynamics 365 Project Operations.
- Sales Lead Generation: Extract data from business cards using Nanonets and use the returned data to create new leads in Dynamics 365 Sales.
- Insurance Claims: When an insurance claim form is uploaded to a SharePoint folder, extract the claim details with Nanonets, and update a claim record in a custom-built Power App.
- Health Records: When medical documents are uploaded to a secure OneDrive folder, extract patient data with Nanonets, and update the patient's record in a healthcare management application.

How to set up Nanonets in Power Automate

Setting up Nanonets in Power Automate involves building a custom connector. Here is a step-by-step guide to creating a custom connector for the Nanonets API:

1. Get your API key from Nanonets

The first step is to generate an API key from your Nanonets account. This key will be used to authenticate your requests to the Nanonets API. You can find instructions on how to get your API key here.

2. Create a custom connector in Power Automate

- Navigate to https://flow.microsoft.com and sign in to your account.
- From the left navigation bar, select "Data" and then "Custom connectors".
- Click "+ New custom connector" and choose "Create from blank".
- Give your connector a name and click "Continue".

3. Set up the general details

- For "Scheme", choose "HTTPS".
- In the "Host" field, enter the Nanonets API base URL (it should be something like "app.nanonets.com").
- Click "Security" in the navigation panel on the left.

4. Set up the security details and connector actions

Note: for this section, you can use the Nanonets API documentation to configure the security and action details.

- Define the security details. You can use the API key authentication method to authenticate using your API key.
- Create a new action.
- Define and fill in the details of your Nanonets model prediction endpoint to create the action.

5. Test the connector

- Click "Test" in the navigation panel on the left.
- You may need to create a new connection. If so, click "+ New connection".
- Choose an action to test, fill in any required inputs, and click "Test operation".

Once the custom connector is set up, you can use it in your Power Automate flows just like any other connector.
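To make the connector configuration above more concrete, here is a minimal Python sketch of the kind of HTTP request the custom connector ends up issuing: a POST to a model's prediction endpoint, authenticated with the API key. The endpoint path, model ID, and auth scheme here are assumptions for illustration — consult the Nanonets API documentation for the exact values, and note that a real upload would send the file as multipart/form-data rather than a raw body.

```python
# Sketch of the request shape behind the custom connector (illustrative only).
import base64
import urllib.request

def build_prediction_request(api_key: str, model_id: str, file_bytes: bytes):
    # Hypothetical prediction endpoint path; verify against the API docs.
    url = f"https://app.nanonets.com/api/v2/OCR/Model/{model_id}/LabelFile/"
    # API-key-as-username HTTP Basic auth with an empty password, matching
    # the "api-key" security setup described in the connector steps above.
    token = base64.b64encode(f"{api_key}:".encode()).decode()
    return urllib.request.Request(
        url,
        data=file_bytes,  # a real call would wrap this in multipart/form-data
        headers={"Authorization": f"Basic {token}"},
        method="POST",
    )

req = build_prediction_request("MY_API_KEY", "model-123", b"%PDF-...")
print(req.full_url)
print(req.get_method())  # POST
```

Inside Power Automate you never write this code — the connector's "Host", "Scheme", and security settings generate the equivalent request — but seeing the raw shape helps when debugging a failing "Test operation".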
You'll be able to choose the actions you defined for the connector and use the data returned from Nanonets in other actions within your workflow.

Nanonets OCR for Automated Workflows in Power Automate

In conclusion, Nanonets OCR is a powerful addition to Power Automate's arsenal of automation capabilities, providing opportunities to streamline and improve document data extraction processes in workflows. Its machine learning algorithms are designed to accurately decipher text from various document types, offering a solution that goes beyond traditional OCR technology by learning from the data it processes and improving over time.

With Power Automate's flexible and robust platform, incorporating Nanonets OCR into automated workflows is a relatively straightforward process. Power Automate's ability to create custom connectors allows integration with the Nanonets API, opening up a multitude of use cases. Whether it's extracting information from invoices, contracts, purchase orders, medical records, or any other type of document, the combined capabilities of Power Automate and Nanonets OCR can handle it.

One of the key strengths of this integration is its ability to significantly reduce manual data entry and the errors that come with it. By automating the data extraction process, businesses can improve the accuracy of their data and free up employees' time to focus on more value-added tasks.

Furthermore, Nanonets' ready-to-use integration with Microsoft Dynamics 365 makes it possible to apply the extracted data directly in various business applications, providing an end-to-end solution for document data extraction workflows.

In a business environment increasingly leaning towards automation and digital transformation, tools like Power Automate and Nanonets OCR are becoming essential.
They not only provide automation capabilities but also harness the power of machine learning, resulting in smarter, more efficient, and less error-prone business processes. Power Automate's accessibility and Nanonets' powerful OCR functionality are a potent combination that can significantly impact businesses. As we continue to witness the digital transformation of various industries, the integration of these two platforms will undoubtedly play an influential role in shaping efficient, automated, and intelligent business workflows.


FooBar is FooBad

FooBar is a metasyntactic variable, a "specific word or set of words identified as a placeholder in computer science", per Wikipedia. It's the most abstract stand-in imaginable, the formless platonic ideal of a Programming Thing. It can morph into a variable, method, or class with the barest change of capitalization and spacing. Like "widget", it's a catch-all generic term that lets you ignore the specifics and focus on the process. And it's overused.

Concrete > Abstract

Human brains were built to deal with real things. We can deal with unreal things, but it takes a little bit of brainpower. And when learning a new language or tool, brainpower is in scarce supply. Too often, `FooBar` is used in tutorials when almost anything else would be better. Say I'd like to teach Python inheritance to a new learner.

```python
# Inheritance
class Foo:
    def baz(self):
        print("FooBaz!")

class Bar(Foo):
    def baz(self):
        print("BarBaz!")
```

A novice learner will have no idea what the above code is doing. Is it `Bar` inheriting from `Foo` or vice versa? If it seems obvious to you, that's because you already understand the code! It makes sense because we already know how it works. Classic curse of knowledge. Why force learners to keep track of whether Foo comes before Bar instead of focusing on the actual lesson?

Compare that to this example using concrete, real-world, non-abstract placeholders:

```python
# Inheritance
class Animal:
    def speak(self):
        print("")

class Dog(Animal):
    def speak(self):
        print("Bark!")
```

This is trite and reductive. But it works. It's immediately clear which way the inheritance runs. Your brain leverages its considerable real-world knowledge to provide context instead of mentally juggling meaningless placeholder words. As a bonus, you effortlessly see that the Dog class is a noun/thing and the speak() method is a verb/action.

Concrete Is Better for Memory

Even if a learner parses your tutorial, will they remember it? The brain remembers concrete words better than abstract ones.
Imagine a cherry pie, hot and steaming, with a scoop of ice cream melting down the side. Can you see it? Now try to imagine a "Foo"... Can you see it? Yeah, me neither.

Concrete examples are also more unique. AnimalDog is more salient than FooBar in the same way "John is a baker" is easier to remember than "that man's name is John Baker". It's called the Baker-Baker effect. Your brain is full of empty interchangeable labels like Foo, Bar, John Smith. But something with relationships, with dynamics and semantic meaning? That stands out.

Concrete Is Extensible

Let's add more examples to our tutorial. Sticking to Foo, I suppose I could dig into the metasyntactic variable Wikipedia page and use foobar, foo, bar, baz, qux, quux, corge, grault, garply, waldo, fred, plugh, xyzzy, and thud.

```python
# Inheritance
class Foo:
    def qux(self):
        print("FooQux!")

class Bar(Foo):
    def qux(self):
        print("BarQux!")

class Baz(Foo):
    def qux(self):
        print("BazQux!")
```

But by then, we've strayed from 'beginner demo' to 'occult lore'. And the code is harder to understand than before! Using a concrete example, on the other hand...

```python
# Inheritance
class Animal:
    def speak(self):
        print("")

class Dog(Animal):
    def speak(self):
        print("Bark!")

class Cat(Animal):
    def speak(self):
        print("Meow!")
```

Extension is easy and the lesson is reinforced rather than muddied.

Exercise for the reader: see if you can rewrite these Python examples on multiple inheritance in a non-foobar'd way.

Better Than Foo

Fortunately, there are alternatives out there. The classic intro Animal, or Vehicle, and their attending subclasses. Or might I suggest using Python's convention of spam, eggs, and hams? A five-year-old could intuit what eggs = 3 means. There's also cryptography's Alice and Bob and co. Not only are they people (concrete), but there's an ordinal mapping in the alphabetization of their names. As an added bonus, the name/role alliteration aids in recall. (Mallory is a malicious attacker.
Trudy is an intruder.)

New Proposal: Pies

Personally, I think pies make excellent example variables. They're concrete, have categories (Sweet, Savory), subtypes (Fruit, Berry, Meat, Cream), and edge cases (Pizza Pies, Mud Pies).

```python
# Pies
fruit = ['cherry', 'apple', 'fig', 'jam']
meat = ['pork', 'ham', 'chicken', 'shepherd']
nut = ['pecan', 'walnut']
pizza = ['cheese', 'pepperoni', 'hawaiian']
other = ['mud']
```

They also come baked in with a variety of easy-to-grasp methods and attributes like slice(), bake(), bake_time, or price, all of which can be implicitly understood. Though if pies aren't your thing, there's a whole world of concrete things to choose from. Maybe breads?

```python
['bun', 'roll', 'bagel', 'scone', 'muffin', 'pita', 'naan']
```

Conclusion

I'm not holding my breath for foobar to be abolished. It is short, easy, abstract, and (most importantly) established. Mentally mapping concrete concepts is hard. Analogies are tricky and full of false assumptions. Maps are not the territory. You're trying to collapse life in all its complexity into something recognizable but not overly reductive or inaccurate. But the solution is not to confuse abstractness for clarity.

For tutorials, extended docs, and beginner audiences, skip foobar. Use concrete concepts instead, preferably something distinct that can be mapped onto the problem space. And if it gives implicit hierarchy, relationships, or noun/verb hinting, so much the better.

Use FooBar when you're trying to focus on the pure abstract case without extra assumptions cluttering the syntax. Use it in your console, debuggers, and when you're talking to experienced programmers. But for anything longer than a brief snippet, avoid it.

The post FooBar is FooBad appeared first on Simple Thread.
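As a sketch of the pie proposal above — the class and method bodies here are my own illustration of the slice()/bake()/bake_time/price idea, not code from the original post:

```python
# A concrete placeholder class in the spirit of the pie proposal.
# Every name (bake, slice, bake_time, price) is implicitly understandable.
class Pie:
    def __init__(self, filling: str, price: float, bake_time: int = 45):
        self.filling = filling
        self.price = price
        self.bake_time = bake_time  # minutes
        self.baked = False

    def bake(self):
        self.baked = True

    def slice(self, pieces: int = 8) -> list:
        """Split the pie into equal slices."""
        return [f"{self.filling} slice" for _ in range(pieces)]

cherry = Pie("cherry", price=12.50)
cherry.bake()
print(cherry.baked)          # True
print(len(cherry.slice(6)))  # 6
```

Even without comments, a beginner can guess what each line does — which is exactly the point.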




