
Software Security Vs Performance
Category: Technology

According to my findings, it is heavily articulated in the Software Engineering community that Securi ...


Views: 279 Likes: 99
Embracing Λόγος: Programming as Imitation of the Divine

Within the field of software development, we are prone to gazing upon the future – new libraries, new tools. But from where did we come? The philosophical foundation of the field is largely absent from the contemporary zeitgeist, but our work is deeply rooted in the philosophical traditions of not only Logic, but Ontology, Identity, Ethics and so on. Daily, the programmer struggles with not only their implementation of logic but the ontological and identity questions of classifying and organizing their reality into a logical program. What is a User? What are its properties? What actions can be taken on it?
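To ground those questions in code, here is a purely illustrative Python sketch (the User shape below is hypothetical, not something from the essay) of a programmer answering them: what a User is, what properties give it identity, and what actions may be taken on it, and by whom.

# A purely illustrative sketch: ontology (what a User is), identity
# (the properties by which it is known), and the actions permitted on it.
from dataclasses import dataclass

@dataclass
class User:
    # Identity: the properties that make this User this User.
    username: str
    email: str
    is_admin: bool = False

    # Action: what may be done to a User, and by whom (a question of Ethics).
    def deactivate(self, actor: "User") -> None:
        if not actor.is_admin:
            raise PermissionError("only admins may deactivate users")
        print(f"{self.username} deactivated by {actor.username}")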
“Oh the mundanity!” – cries the programmer. But indeed, as we will explore here – you are doing God’s work! Because the work of programmers is not too dissimilar from that of philosophers throughout history, we can look to them for guidance on the larger questions of our own tradition. In this piece, we will focus mainly on the ancient Greeks and their metaphysical works. Guided by their knowledge, we can better incorporate Reason and Logic into our programs and strive to escape Plato’s Cave (https://en.wikipedia.org/wiki/Allegory_of_the_cave). Furthermore, because the results of our work are our reason manifested into reality, we must suffer under the greater burden of responsibility to aim towards the divine Reason.

Λόγος

[T]he spermatikos logos in each man provides a common, non-confessional basis in each man, whether as a natural or supernatural gift from God (or both), by which he is called to participate in God’s Reason or [Λόγος], from which he obtains a dignity over the brute creation, and out of which he discovers and obtains normative judgments of right and wrong. (https://lexchristianorum.blogspot.com/2010/03/st-justin-martyr-spermatikos-logos-and.html)

The English word logic is rooted in the Ancient Greek λόγος (lógos) – meaning “word, discourse or reason”. Λόγος is related to the Ancient Greek λέγω (légo) – meaning “I say”, a cognate with the Latin lex, or “law”. Going even further back, λόγος derives from the PIE root *leǵ-, which can have the meanings “I put in order, arrange, gather, I choose, count, reckon, I say, speak”. (https://en.wikipedia.org/wiki/Logos)

The concept of the λόγος has been studied and applied philosophically throughout history – going back to Heraclitus around 500 BC. Heraclitus described the λόγος as the common Reason of the world and urged people to strive to know and follow it. “For this reason it is necessary to follow what is common. But although the λόγος is common, most people live as if they had their own private understanding.” (Diels–Kranz, 22B2)

With Aristotelian, Platonic and early Stoic thought, the λόγος as universal and objective Reason and Logic was further considered and defined. Λόγος was seen by the Stoics as an active, material phenomenon driving nature and animating the universe. The λόγος σπερματικός (“logos spermatikos”) was, according to the Stoics, the principle, generative Reason acting in inanimate matter in the universe. Plutarch, a Platonist, wrote that the λόγος was the “go-between” between God and humanity. The Stoics believed that humans each possess a part of the divine λόγος. The λόγος was also a fundamental philosophical foundation for early Christian thought (see John 1:1–3).

The λόγος is impossible to concisely summarize. But for the purpose of this piece, we can consider it the metaphysical (real but immaterial) universal Reason; an infinite source of Logic and Truth into which humans tap when they reason about the world.

Imitation of the Divine

In so far as the spirit is also a kind of ‘window on eternity’… it conveys to the soul a certain influx divinus… and the knowledge of a higher system of the world. (Jung, Carl. Mysterium Coniunctionis)

What is “imitation of the divine”? One could certainly begin by considering what the alternative would be. A historical current has existed in the philosophical tradition of humanity’s opportunity and responsibility to turn to and harness the divine λόγος in their daily waking life. With language and thought we reason about the material and immaterial. As Rayside and Campbell declared in their defense of traditional logic in the field of Computer Science – “But if what is real and unchanging (the intelligible structure in things) is the measure of what we think about it (concept) and speak (word) about it, then it too is a work of reason not our reason, for our reason is the measured, but of Reason.” (Rayside, D., and G. Campbell. Aristotle and Object-Oriented Programming: Why Modern Students Need Traditional Logic. https://dl.acm.org/doi/pdf/10.1145/331795.331862)

Plato, in his theory of the tripartite soul, understood that the ideal human would not suffer passions (θυμοειδές, literally “anger-kind”) or desires (ἐπιθυμητικόν) but be led by the λόγος innate in the soul (λογιστικόν). When human reasoning is concordant with Reason, for a moment, Man transcends material reality and is assimilated with the divine (the λόγος). “Hence, so many of the great thinkers who have gone before us posited that the natural way in which the human mind gets to God is in a mediated way — via things themselves, which express God to the extent that they can.” (Rayside, Campbell) God here is the representative of the λόγος – humanity can achieve transcendental knowledge by consideration (in the deepest sense of the word) of the things around them.

The Programmer Assimilated

It is simply foolish to pretend that human reason is not concerned with meaning, or that programming is not an application of human reason. (Rayside, Campbell)

The programmer must begin by defining things – material or conceptual. “We are unable to reason or communicate effectively if we do not first make the effort to know what each thing is.” (Rayside, Campbell) By considering the ontological questions of the things in our world, in order to represent them accurately (and therefore ethically) in our programs, the programmer enters into the philosophical praxis. Next, the programmer adds layers of identity and logic on top of their ontological discovery, continuing in the praxis. But the programmer takes it a step further – the outcome of their investigation is not only their immaterial thought but, in executing the program, the manifestation of their philosophical endeavor into material reality. The program choreographs trillions of elementary charges through a crystalline maze, harnessing the virtually infinite charge of the Earth, incinerating the remains of starlight-fueled ancient beings in order to realize the reasoning of its programmer.

Here the affair enters into the realm of Ethics. “The programmer is attempting to solve a practical problem by instructing a computer to act in a particular fashion. This requires moving from the indicative to the imperative, from can or may to should. For a philosopher in the tradition, this move from the indicative to the imperative is the domain of moral science.” (Rayside, Campbell) Any actions taken by the program are the direct ethical responsibility of the programmer. Furthermore, the programmer, as the source of reason and will driving a program, manifesting it into existence, becomes in that instant the λόγος σπερματικός (“logos spermatikos”) incarnate. The programmer’s reason, tapped into the divine Reason (λόγος), is generated into existence in the Universe and commands reasonable actions of inanimate matter.

Feeble Earthworm

What sort of freak then is man? How novel, how monstrous, how chaotic, how paradoxical, how prodigious! Judge of all things, feeble earthworm, repository of truth, sink of doubt and error, glory and refuse of the universe! (Pascal, B. (1670). Pensées.)

Pascal would be even more perplexed by the paradox of the programmer – in search of Logic and simultaneously materializing their logic; their “repository of truth” a hand emerging from the dirt reaching towards the λόγος. Programmers are equals among the feeble earthworms crawling out of Plato’s cave. We enjoy no extraordinary access to Reason and yet bear a greater responsibility as commanders of this technical revolution in which we find ourselves. While the Greeks had an understanding of the weight of their work, their impact was restricted to words. The programmer’s work is a true hypostatization or materialization of the programmer’s reason. As programmers – as beings of Reason at the terminal of this grand system – we should most assuredly concern ourselves with embracing and modeling ourselves and our work after the divine and eternal λόγος.

The post Embracing Λόγος: Programming as Imitation of the Divine appeared first on Simple Thread.


SQL Developer
Category: Jobs

Would you be interested in the following long-term opportunity?   If not int ...


Views: 0 Likes: 73
Tips & Tricks When Drawing a Realistic Photo
Category: Art

One of the most important mistakes to avoid when drawing a detailed picture is to damage th ...


Views: 0 Likes: 29
Food for Software Developers
Category: Health

These notes are based on my own findings; they are not off ...


Views: 266 Likes: 86
Software Development Architecture and Good Practic ...
Category: System Design

These notes are used to drill down into the most op ...


Views: 0 Likes: 33
Statistics Best Books and Machine Learning Resourc ...
Category: Technology

h ...


Views: 0 Likes: 28
How to Automate Income for a small Business in 202 ...
Category: Research

Diversifying income streams is a smart strategy for small businesses to reduce risk and explore ...


Views: 0 Likes: 6
Lead Software Engineer
Category: Jobs

LawnStarter is a marketplace that makes lawn care easy for homeowners while helping small busines ...


Views: 0 Likes: 34
What is an OEM Pack in a CPU
Category: Servers

OEM stands for Original Equipment Manufacturer, which refers to a company that produces hardware ...


Views: 0 Likes: 16
How to Stop Wasting Time in Pointless Meetings: 5 Things to Improve Your Meetings

Have you ever left a meeting feeling like you just wasted an hour (or more) of your day? You’re not alone. Many people have experienced the frustration of attending meetings that are disorganized, unproductive, and seemingly pointless. That’s where the Level 10 meeting agenda comes in.

The Level 10 is part of the larger Entrepreneurial Operating System® (EOS). EOS is a comprehensive set of practical tools and concepts that have helped thousands of small to medium-sized organizations worldwide achieve their business goals – including Simple Thread! One of the most popular components of EOS is the Level 10 meeting, a weekly meeting that is designed to be highly efficient, productive, and engaging. So, how do you make a meeting efficient, productive, and engaging? Here are 5 things that work for us:

1. Same Bat Time, Same Bat Channel

First and foremost, the meeting should take place on the same day and time each week. The meeting follows a strict agenda, which includes several key items that are critical to its success. I will share more about these next.

2. Be Present

An opening segue provides the opportunity to shift the team’s attention from the distractions of the latest Slack chat or email that needs a reply and bring the focus to the present. At the start of the meeting, I might ask everyone to share their “best personal and best professional highlight” of the previous week. This can help set a positive tone and encourage everyone to engage in the meeting. Another great meeting opener is the “rose, thorn, and bud” method, which is a design thinking tool that helps identify what’s working (rose), what’s not (thorn), and what can be improved (bud).

“If You Can’t Measure it, You Can’t Improve it” – Peter Drucker

3. You Gotta Track Something

The meeting then moves on to review the key performance indicators (KPIs) or scorecard for the department. This provides a weekly check-in on the numbers that are leading indicators of success and drives conversation around areas of opportunity or concern. What you track may vary by department; for marketing, we look at website traffic, conversions, and inbound leads, to name a few!

4. Have S.M.A.R.T., Realistic Quarterly Goals

Next, the team discusses their quarterly goals and reports on whether they are on track or off track towards those goals. This helps ensure that everyone is aligned on the department’s priorities and progress towards achieving them. If someone is “off track”, it gets added to the agenda for discussion and for the group to find ways to support and help get the project moving in the right direction.

“If You Don’t Know Where You Are Going, You’ll End Up Someplace Else” – Yogi Berra

5. Identify. Discuss. Solve.

The meeting then moves on to the most crucial part of the Level 10 meeting: tackling issues as a team. This is when I will guide the team through the IDS process: Identify, Discuss, and Solve. The team identifies the real issue, discusses it from all angles, and then settles on a solution and one or two action points to implement the solution.

And Now, to Wrap Things Up Like a Present…

As the meeting comes to a close, the team takes five minutes to wrap up. This includes recapping the to-do list, sharing information from the meeting with the rest of the organization, and giving the meeting a grade on a scale of 1 to 10. EOS emphasizes that the most important criterion for grading the meeting is how well the team followed the agenda. So there you have it! A recipe for a meeting that is productive, efficient, and engaging!

The Level 10 meeting is a powerful tool for organizations looking to run efficient and productive meetings. By following a strict agenda and incorporating key components like KPIs, quarterly goals, and the IDS process, teams can stay aligned and make progress towards achieving their business objectives. Try it out and let us know what you think – and say goodbye to wasted time and hello to more productive, engaging meetings!

The post How to Stop Wasting Time in Pointless Meetings: 5 Things to Improve Your Meetings appeared first on Simple Thread.


How do I free up space on a Linux VM
Category: Research

Title: Maximizing Linux Virtual Machine Performance: Freeing Up Space and Optimizing Disk Usage ...


Views: 0 Likes: 0
An error occurred during the compilation of a reso ...
Category: .Net 7

Question: Why is this error happening? "An error occurred during the compilation of a resource re ...


Views: 0 Likes: 33
Asp.Net 5 Development Notes (DotNet Core 3.1 Study ...
Category: Software Development

Study Notes to use when progra ...


Views: 423 Likes: 61
filezilla error while writing: received failure wi ...
Category: Network

Problem: FileZilla error while writing: received failure with description 'Failure'. Error: File t ...


Views: 5573 Likes: 115
Sr. Software Engineer
Category: Technology

As one of our engineers, you'll help guide key development and technology decisions in our ...


Views: 0 Likes: 51
DotNet Software Development and Performance Tools
Category: .Net 7

[11/11/2022] Bombardia Web Stress Testing Tools ...


Views: 0 Likes: 75
How to Optimize Software performance
Category: Computer Programming

Software performance is very important, early 201 ...


Views: 0 Likes: 31
[Solved] How to Resolve a Suspect Database in Microso ...
Category: SQL

Question: How do you remove the status of "Emergency" from the ...


Views: 168 Likes: 68
RedisTimeoutException: Timeout awaiting response ( ...
Category: Other

RedisTimeoutException: Timeout awaiting response (outbound=0KiB, inbound=0KiB, 5094ms elapsed, ti ...


Views: 0 Likes: 16
[Video] Learn How To Learn Fast
Category: Technology

This video will teach you how great inventors learn fast and types of learning. This is very helpful ...


Views: 272 Likes: 98
What is Computer Programming
Category: Computer Programming

<div class="group w-full text-gray-800 darktext-gray-100 border-b border-black/10 darkborder-gray- ...


Views: 0 Likes: 17
How to Use a well Known Drawing Method to Achieve ...
Category: Art

The Grid Method is a method of drawing an outline from a reference photo onto paper. T ...


Views: 0 Likes: 35
Amazon is hiring SDE2s
Category: Jobs

Amazon is hiring SDE2s all around the US, Canada and Mexico!!! (No 3rd parties. Thanks!) Ple ...


Views: 43 Likes: 41
Why Software Design and Architecture is very impor ...
Category: Computer Programming

Thorough System Analysis becomes vital t ...


Views: 0 Likes: 31
How to solve problems
Category: Software Development

Instead of asking, "What problems should I solve?" ask, "What problems do I wish someone else would s ...


Views: 303 Likes: 121
Software Development Good Practices
Category: .Net 7

Knowledge Collected Over the Years of Developing: Design your soft ...


Views: 231 Likes: 70
Technical Project Manager
Category: Jobs

"IMMEDIATE REQUIREMENT" Please share the suitableprofile to&nbsp;<a href="mailtoelly.jack ...


Views: 0 Likes: 32
Good Problem Solving Tip
Category: Software Development

<span style="font-size 12pt; font-family Arial; background-color ...


Views: 317 Likes: 108
[Software Development] Discover ErnesTech Step-by- ...
Category: Computer Programming

At ErnesTech, we take a collaborative approach to ensure your satisfaction and success. Our seaml ...


Views: 0 Likes: 33
Writing Tips for Improving Your Pull Requests

You’ve just finished knocking out a complex feature. You’re happy with the state of the code, you’re a bit brain-fried, and the only thing between you and the finish line is creating a pull request. You’re not going to leave the description field blank, are you? You’re tired, you want to be done, and can’t people just figure out what you did by looking at the code?

I get it. The impulse to skip the description is strong, but a little effort will go a long way toward making your coworkers’ lives easier when they review your code. It’s courteous, and – lucky for you! – it doesn’t have to be hard. If you’re thinking I’m going to suggest writing a book in the description field, you’re wrong. In fact, I’m going to show you how to purposely write less by using the techniques below.

Make it Scannable

If your code is a report for the board of directors, your pull request description is the executive summary. It should be short and easy to digest while packing in as much important information as possible. The best way to achieve this combination is to make the text scannable. You can use bold or italic text to draw emphasis to important details in a paragraph. However, the best way to increase scan-ability is the liberal application of bulleted lists. Most of my PR descriptions start like this:

If merged, this PR will:
* Add a Widget model
* Add a controller for performing CRUD on Widgets
* Update routes.rb to include paths for Widgets
* Update user policies to ensure only admins can delete Widgets
* Add tests for policy changes
…

There are a few things to note here. I’m using callouts to bring attention to important changes, including the object that’s being added and important files that are being modified. The sentences are short and digestible. They contain one useful piece of information each. And, for readability, they all start with a capital letter and end with no punctuation. Consistency of formatting makes for easier reading.

Speak Plainly

Simpler words win if you’re trying to quickly convey meaning, and normal words are preferable to jargon. Here are a few examples:
* Replace utilize with use. They have different meanings, and you’re likely wanting the meaning of use, which has the added bonus of being fewer characters.
* Replace ask with request. “The ask here is to replace widget A with widget B.” Ask is not a noun; it’s a verb.
* Replace operationalize with do. A savings of 12 characters and 5 syllables!

There are loads of words that we use daily that could be replaced with something simpler; I bet you can think of a few off the top of your head. For more examples, see my book recommendations at the end of this article.

Avoid Adverbs

Piggybacking on the last suggestion, adverbs can often be dropped to tighten up your prose. Spotting an adverb is easy. Look for words that end in -ly. Really, vastly, quickly, slowly – these are adverbs, and they usually can be removed without changing the meaning of your sentence. Here’s an example:

“Replace a really slowly performing ActiveRecord query with a faster raw SQL query”
“Replace a slow ActiveRecord query with a faster raw SQL query”

Since we dropped the adverbs, performing doesn’t work on its own, so we can remove it and save even more characters.

Simplify Your Sentences

Sentences can sometimes end up unnecessarily bloated. Take this example: “The reason this is marked DO NOT MERGE is because we’re missing the final URL for the SSO login path.” The reason this is can be shortened to simply this is. The is before because is unnecessary and can be removed. And the last part of the sentence can be rejiggered to be more direct while eliminating an unnecessary prepositional phrase. The end result is succinct: “This is marked DO NOT MERGE because we’re missing the SSO login path’s production URL.”

Bonus Round: Avoid Passive Voice

Folks tend to slip into passive voice when talking about bad things like bugs or downtime. Uncomfortable things make people want to ensure they’re dodging – or not assigning – blame. I’m not saying you should throw someone under the bus for a bug, but it helps to be direct when writing about your code. “We were asked to implement the feature that caused this bug by the sales team.” The trouble here is were asked. This makes the sentence sound weak. Luckily, a rewrite is easy: “The sales team asked us to implement the feature that caused this bug.” By moving the subject from the end of the sentence to the beginning, we ditch the unnecessary prepositional phrase by the sales team, shorten the sentence, and make the overall meaning clear and direct.

There’s More!

But we can’t cover it all here. If you want to dig deeper, I recommend picking up The Elements of Style. It’s a great starting point for improving your writing. Also, Junk English by Ken Smith is a fun guide for spotting and avoiding jargon, and there’s a sequel if you enjoy it.

The post Writing Tips for Improving Your Pull Requests appeared first on Simple Thread.


The ONNX Runtime extensions library was not found ...
Category: Research

Introduction: ONNX (Open Neural Network Exchange) is an open-sour ...


Views: 0 Likes: 14
Software Development
Category: Technology

Software Development ...


Views: 304 Likes: 99
How to Resize Local Volume for a Virtual machine i ...
Category: LINUX

Question: How do I resize the local volume on the VM in Proxmox? Answer: 1. Give ...


Views: 0 Likes: 2
Software Development Refactoring Wisdom I gained t ...
Category: Software Development

Software Development Refactoring Wisdom I gained through R ...


Views: 175 Likes: 84
Software Developer (remote job) at Renalogic
Category: Jobs

Software Developer Compensation ...


Views: 0 Likes: 44
Management Structure of the U.S. Bulk Electric System

Simple Thread is a digital product agency with a focus on the electric power industry. The power and electric utility industry is absolutely fascinating in its scale and complexity, and we love sharing all of the interesting things we have learned. The topics may vary from facts about the grid, to green energy, energy sustainability, basic electrical engineering, the future of the grid, and everything in between. If you’d like to hear more, keep checking back!

The United States power grid is quite possibly the most complex machine ever devised. As we discussed in the post A Tale of Two Grids, the continental US power grid is actually made up of three separate synchronous grids called interconnections. These interconnections are:
- The Eastern Interconnection
- The Western Interconnection
- The Texas Interconnection

You can tell that clever names weren’t high on the to-do list.

Federal Energy Regulatory Commission

These interconnections are massively complex, and managing, operating, and regulating them is a monumental task. At the top of the regulatory pyramid is FERC, the Federal Energy Regulatory Commission. FERC was formed in 1977 as a result of the Department of Energy Organization Act, which abolished the previously created Federal Power Commission (FPC) and transferred its responsibilities to FERC. FERC is the federal agency that regulates the transmission and sale of electricity across state lines. However, its powers extend far beyond the electric grid to regulate other forms of energy such as natural gas and oil. It also has the responsibility of ensuring the reliability and security of the nation’s bulk power system. It does this in part through the oversight and approval of reliability standards for the U.S. bulk electric system (any part of the grid operating at 100kV or higher).

North American Electric Reliability Corporation

FERC doesn’t actually create these standards though; it does this through a partnership with an international nonprofit called the North American Electric Reliability Corporation (NERC). NERC is the successor to the North American Electric Reliability Council (also NERC), which was a voluntary industry association formed in the aftermath of the Northeast Blackout of 1965. The current NERC was formed out of the Energy Policy Act of 2005 in the aftermath of the 2003 Northeast blackout. The Energy Policy Act of 2005 mandated the creation of an “Electric Reliability Organization” (ERO) within the United States in order to enforce reliability standards on the bulk power grid. In 2006 FERC approved the newly overhauled NERC to be the Electric Reliability Organization (ERO) for the United States.

NERC develops and oversees the enforcement of mandatory reliability standards across the United States, Canada, and parts of Mexico. It works with a variety of stakeholders, including utility companies and other regulators, to establish standards and best practices for operating the grid. It is also responsible for monitoring the grid’s performance, identifying risks, and performing audits to ensure compliance with its standards.

Regional Entities

You might have noticed that I said “oversees the enforcement of mandatory reliability standards” in the previous paragraph. Yes, NERC doesn’t actually enforce the standards, but instead delegates enforcement of its standards to six Regional Entities (REs). These Regional Entities are responsible for ensuring compliance with NERC’s mandatory reliability standards within a specific region. They audit, assess, and investigate utilities and other electric grid participants for compliance with those reliability standards. They also work with NERC to develop additional regional reliability standards if there are needs specific to where they operate.

Reliability Coordinators

Sitting alongside Regional Entities is another group of organizations known as Reliability Coordinators (RCs). Reliability Coordinators are certified by NERC and are the highest-level organizations responsible for the reliable functioning of the bulk electric system. RCs are primarily responsible for the real-time management of a specific area of the grid. They have a wide-area, real-time view of the grid, and they monitor things like grid conditions, generation output, and transmission line statuses. RCs ensure that generation and demand are balanced, and are also responsible for issuing reliability alerts and implementing emergency procedure directives. For example, they might tell a particular utility to reduce load on a particular transmission line in response to a situation on the grid. Reliability Coordinators can be RTOs/ISOs (discussed below), other regional entities such as the Tennessee Valley Authority, or a single utility such as Southern Company.

Balancing Authorities

Balancing Authorities (BAs) are entities certified by NERC that are responsible for maintaining the balance between electricity supply and demand within a geographical area. There are currently 66 Balancing Authorities within the United States, ranging from large multi-state areas to small chunks of single states. BAs implement real-time grid operations such as dispatching generation, controlling electrical interchange with neighbors, and frequency regulation. Balancing Authorities can be RTOs/ISOs (discussed below), other regional entities such as the Tennessee Valley Authority, or a single utility such as Southern Company.

RTOs and ISOs

As if all of this wasn’t complicated enough, there are also nine organizations known as either Regional Transmission Organizations (RTOs) or Independent System Operators (ISOs). These organizations are responsible for the operation, planning, and management of the transmission grid and electricity markets within their respective regions. The main difference between ISOs and RTOs is that ISOs usually operate within a single state, while RTOs are larger regional entities. The naming is less than clear though, since ISO New England (ISO-NE) and the Midcontinent ISO (MISO) are both RTOs. The reason for this is that ISOs were formed in 1996 as part of FERC Orders 888 and 889, while RTOs were not formed until FERC Order 2000 in 1999. MISO was made an ISO in 1998, but later also became the first RTO in 2001.

RTOs/ISOs and Regional Entities may seem redundant at first glance, but they serve very different purposes. Regional Entities are responsible for enforcing NERC’s reliability standards, including enforcing those standards against RTOs and ISOs. RTOs and ISOs can be audited by Regional Entities and penalized for non-compliance. RTOs and ISOs have to implement those standards, but they are more concerned with coordinating, controlling, monitoring, and managing the grid and electricity markets within their region. One other very important distinction is that RTOs and ISOs are voluntary, and not all of the United States is covered by either type of organization. Parts of the Southeast and Northwest are two of the largest regions that are not covered by ISOs or RTOs. Therefore, utilities within those regions must form agreements with the other power companies or ISOs/RTOs they want to interconnect or trade power with. RTOs can also be ISOs, and they can also be Regional Entities, Reliability Coordinators, and Balancing Authorities! For example, PJM is the RTO for most of the mid-Atlantic and is also the region’s Regional Entity, Reliability Coordinator, and Balancing Authority.

Transmission Operators

And finally we get to the companies that do the work of actually transmitting the electricity! We call these Transmission Operators (TOPs). Transmission Operators are responsible for the maintenance, monitoring, and operation of the part of the transmission grid they own. The list of responsibilities these organizations have is a mile long, but at the end of the day they own a piece of the transmission grid and are responsible for it. There are organizations that sit around them that do things like enforce compliance and manage markets, but Transmission Operators are the folks that show up if there is an actual problem with the grid and fix it. They monitor their piece of the grid in real time and communicate with balancing authorities and regional entities to ensure that everything is running smoothly.

A Quick Example

This is all a bit complex, so to clarify things, let’s look at an example. Simple Thread is headquartered in Richmond, Virginia, on the east coast of the United States, so we are physically located within the Eastern Interconnection. The hierarchy of entities that oversees this area is:
- FERC – The federal agency that regulates the transmission and sale of electricity across state lines. Approves and oversees the creation of standards that are created and enforced by NERC.
- NERC – The nonprofit organization responsible for establishing reliability standards for the North American power grid.
- SERC – The SERC Reliability Corporation (which originally stood for Southeastern Electric Reliability Council) is the Regional Entity for all of the Southeast to which NERC has delegated authority to enforce NERC reliability standards. Virginia is split between two Regional Entities: SERC and ReliabilityFirst.
- PJM – PJM Interconnection Inc. is the RTO that Virginia is located within. The acronym originally stood for Pennsylvania-New Jersey-Maryland, but its footprint is much larger now. PJM oversees a part of the bulk electric system in parts of 13 states and Washington DC that has more than 185 gigawatts of generation capacity. It is also the regional Reliability Coordinator and Balancing Authority for its region.
- Dominion Energy – The power company which is the local Transmission Operator (TOP), meaning that they actually build, maintain, and operate all of the local pieces of the bulk electric system that is overseen by PJM.

Summary

The management structure of the Bulk Power System in the United States is a complex system of organizations, each with different responsibilities and authority. I hope this article gives you a little better insight into the different organizations that exist and what they are responsible for.

The post Management Structure of the U.S. Bulk Electric System appeared first on Simple Thread.


[Free eBook Creator] Make an eBook from your Notes
Category: Technology

Make an eBook for free [generate eBook from your Notes] in ...


Views: 14 Likes: 61
Linux Ubuntu Commands that will increase your prod ...
Category: Linux

Important Comman ...


Views: 498 Likes: 70
Android Studio Error: Cause:com.android.build.grad ...
Category: Android

Android Studio is a popular integrated development environment (IDE) for building Android applicati ...


Views: 277 Likes: 82
[Simplex and Strong duality] Algorithms
Category: Algorithms

<span style="font-size x-large; background-color #ccff33; font-we ...


Views: 257 Likes: 113
C#.NET Developer 3
Category: Jobs

<a title="C#.NET Developer 3" href="https//careers-quadax.icims.com/jobs/1961/c%23.net-software- ...


Views: 0 Likes: 39
Why Open Source Libraries are the Future of Softwa ...
Category: Computer Programming

We have seen famous Social Networks like Facebook being made using ...


Views: 0 Likes: 30
Top 10 sites for Creative Common Video and Music S ...
Category: SELF-HELP

Archive.org ...


Views: 0 Likes: 46
Software Best Practices Learned by Experience
Category: System Design

[Updated] It is considered good practice to cache your data in memory, either o ...


Views: 0 Likes: 38
How to Neutralize the Biggest Threat to Your Online Security (You)

Another day, another data breach. Isn’t this all starting to seem a little too familiar? I don’t know about you, but the endless parade of disclosures is taking up entirely too much space in my news feed, pushing out important information on giant arcade cabinets and open source espresso machines. How is this still such a problem when we’ve all moved on to strong, randomly-generated, single-use passwords stored in password managers and multi-factor authentication? (Hold on, you haven’t done that? Go take care of that right now! I’ll wait.)

Human Error

Well, what do all these incidents have in common (besides giving CISOs heartburn)? Human error. Regardless of any other measures in place, at some point a human was given the sole responsibility for doing the right thing and they fumbled it. Hey, it happens. Even the smartest of us are extremely fallible creatures, and this should surprise no one. What should be surprising is how, even armed with this knowledge, we insist on adopting security practices that assume anything we can usually get right we will always get right. Can you imagine living in a world where that was true?

The initial foothold in most of these attacks was a successful phishing attempt. It might have been a counterfeit login page. It might have been a believable phone call from “customer service”. One way or another, someone was convinced to give out sensitive credentials to someone or something they shouldn’t have. It’s a classic because it works. You wouldn’t fall for that, right? You always check the headers and never click the links. You always hang up and call them back at the official number. You haven’t opened an email attachment since ActiveX roamed the earth. (Wow, it still does. Who knew?) But do you ever get tired? Or busy? Distracted, stressed, even hungry? No? I love the smell of swagger and hubris in the morning. Can you say the same thing about every one of your co-workers? How about your customers? Picture the least alert person you can imagine using a system you care about, and ask yourself why the integrity of that system should rely on their attentiveness.

At least one of these incidents started with a push bombing. On the face of it, those seem pretty easy to avoid, right? Just don’t approve MFA prompts unless you’re actually attempting to sign in. But there’s no rule that limits these attacks to times when you have your game face on. Do you really want to trust your security to your reactions when woken up at 3am by a nonstop stream of notifications, with your lizard brain still in charge of make bad noise stop? Would you agree that a system with a temperamental meat computer as a single point of failure is suboptimal if there are alternatives? If so my friend, I think you’re ready to hear about phishing-resistant MFA.

What’s Wrong With Most MFA?

Time-based One-Time Password (TOTP) authentication relies on a shared secret and a visible code. Only your authenticator app and the service you’re authenticating with know the secret for generating the correct code at any given moment. The service asks for the code, you provide it, and that proves to the service that you are you. But you get no such assurance from the service. This leaves you almost as vulnerable to phishing as if you weren’t using MFA at all. Instead of convincing you to share only your password, the attacker also has to trick you into sharing your code, but the only real obstacle is whether they can act on that code before it expires.
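To make the shared-secret model concrete, here is a minimal Python sketch of TOTP code generation as described in RFC 6238 (the base32 secret is a well-known documentation placeholder, not a real credential). Note that nothing in the exchange identifies the service to you, and anyone who phishes the six digits can replay them until the 30-second window rolls over.

# Minimal TOTP sketch (RFC 6238). The secret is a documentation
# placeholder, not a real credential.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // interval            # both sides derive this from the clock
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# "JBSWY3DPEHPK3PXP" is the placeholder secret used in many docs and demos.
print(totp("JBSWY3DPEHPK3PXP"))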
Another common approach is MFA via push notification. You attempt to access a service, it sends a push notification to your registered mobile device, you approve the access request, and that “proves” to the service that you’re the one attempting to log in. But as increasing numbers of push bombing incidents show, the fact that you were convinced to interact with a notification isn’t a guarantee of intentionality.

MFA via SMS, email or voice is a train wreck, with all the same vulnerabilities as the methods above and some exciting unique additions like SIM swap attacks. Friends don’t let friends MFA this way. Which is naturally why it’s the only form of MFA most financial institutions support.

Phishing-Resistant MFA

This term applies to two categories of authentication. PKI-based MFA (public key infrastructure, generally encountered as smart cards) has been around for decades. But since it depends on having that infrastructure in place, and strong identity management, it’s generally the province of government agencies and large enterprises and is less supported by the types of services many of us use. The odds are good that if PKI makes sense for you, you’re already using it and are in a better position to write about it than I am. But do it on your own time.

A more appropriate option for most people is FIDO (Fast IDentity Online) authentication. Those links at the top of the post? I bet I snuck something past you. The last attack, on Cloudflare, didn’t actually result in a breach. Why not? Because everyone at Cloudflare authenticates with a FIDO2-compliant key that enforces origin binding with public key cryptography. Their write-up does a great job of explaining how the attack worked and how it would have played out if they were using standard TOTP MFA, but glosses over how it fizzled out when it ran into FIDO.

Unlike TOTP, FIDO doesn’t rely on a single shared secret known to both the authenticator and the service. When a hardware key is registered with a service, the device generates a new public-private key pair. The public key goes to the service, while the private key never leaves the secure storage of the device, where it’s tied to the identity of the service. During authentication, the service sends a challenge to the device. The device finds the private key tied to that service identity and uses it to sign the challenge. The service uses the public key to verify that the challenge was signed by the real private key and allows the connection.

This process delivers some very powerful assurances. There is no user-facing code you can be tricked into revealing. Only the private key can successfully sign the challenge, so the service can be sure the hardware key is authentic. But the device will only be able to find a private key for the exact service it was registered with. It’s not going to be fooled by a phishing site at the wrong URL, regardless of how good a forgery it is. The only way around the origin binding I’m aware of would be for the attacker to poison the victim’s DNS so their phishing site was accessible through the correct URL for the real service and have a valid SSL certificate for that domain. That would involve a compromise of the user’s machine significant enough for the attacker to add their own certificate authority as a trusted root, or the ability to generate valid certificates for the service’s domain. If either of those are true, you’re going to have a bad day regardless of the security process you’re using. The sketch below illustrates this registration and challenge flow.
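Here is a minimal, purely illustrative Python sketch of that per-origin challenge-response flow, using Ed25519 signatures from the third-party cryptography package to stand in for the authenticator hardware (real FIDO2/WebAuthn exchanges carry more structure than this):

# Illustrative sketch of FIDO-style origin binding; the origin below is a
# hypothetical relying party, and key handling is simplified.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Registration: the authenticator mints a fresh key pair for this origin.
keystore = {}                                  # private keys never leave the device
origin = "https://example.com"                 # hypothetical relying party
keystore[origin] = Ed25519PrivateKey.generate()
public_key = keystore[origin].public_key()     # only the public key goes to the service

# Authentication: the service sends a random challenge, and the device signs
# it together with the origin, using the key bound to that exact origin.
challenge = os.urandom(32)
signature = keystore[origin].sign(challenge + origin.encode())

# The service verifies with the stored public key. A phishing site at a
# different origin finds no key in the keystore, so there is nothing for
# the user to reveal.
try:
    public_key.verify(signature, challenge + origin.encode())
    print("authenticated")
except InvalidSignature:
    print("rejected")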
FIDO also sidesteps the issues with push notifications by tying the authentication mechanism directly to the device attempting to authenticate. The hardware key is plugged into the web browsing device (literally or wirelessly) and all interaction between the key and the service goes through the web browser, initiated only by the user’s actions there. There’s no question that the user (or at least the key) is in fact present at the point of login.

I’m sure by now you’ve come up with at least one reason why FIDO sounds nice but would never work for you. Come at me.

Does anyone even support this thing?

You’d be surprised. Microsoft, Apple, Linux and Android all support FIDO at the system level. Browser compatibility is strong: Chrome, Firefox, Edge, Safari, Opera, Vivaldi. The major cloud services providers are all covered, as well as common tools like GitHub and Dropbox.

All this sounds great for proving that the key is present, but how does it prove I’m the one using it? What happens if it’s been stolen?

That’s a great point. FIDO is definitely designed to counter remote attackers. Local attackers with physical access to your key aren’t part of the threat model the bulk of the specs are addressing. That’s why, even though FIDO2 in particular is touted as sufficient authentication unto itself, no passwords required, I myself would never go that far. This is where the “multi” in multi-factor authentication really comes into play. The hardware key is something you have, but I would still recommend requiring something you know, whether it’s a password on the account or a PIN on the key (which is absolutely something you can set). The options for unlocking the hardware key are largely up to the manufacturer, but many also come with biometric options like fingerprint readers, so you can also throw something you are into the mix.

What about when I lose the key?

Yeah, don’t do that. Kidding! Best practice is to have at least one backup key, stored in a different location. The point of the hardware key is to prevent the private keys from ever being readable from outside, which means there’s no way to simply clone a backup. You’re going to need to register each key separately with each service. Not ideal, I know, but it doesn’t have to be as tedious as it sounds either. A common strategy is to only protect the most sensitive accounts with the hardware key directly, and to use TOTP for the rest, but to use a TOTP authenticator app that supports being locked behind the hardware key. This still provides some of the FIDO benefits (no one can access your authenticator without your key) while minimizing how often keys need to be registered with a new service.

I’m never going to remember to have this thing with me.

You don’t have a keychain? You still have options, by tethering keys to specific devices. Low-profile nano keys are available that can be left in a USB port, giving that machine a more or less permanent authentication connection. And many machines come with built-in trusted platform modules specifically for protecting this kind of information. Windows devices using Hello, Apple devices with Touch ID or Face ID, and some Android phones can all be used as authenticators.

My phone isn’t supported as an authenticator. And the idea of plugging in a key every time I want to authenticate sounds ridiculous, let alone leaving something permanently attached to my phone.

Hardware keys also come in NFC and Bluetooth flavors. Tap to auth!

This sounds expensive.
It’s very likely at least some of the devices you use regularly already support FIDO. But yes, hardware security keys aren’t cheap. Neither are identity theft or corporate data breaches.

There, did you get it out of your system? No? Or have you already dashed off to try it? Either way, let us know!

The post How to Neutralize the Biggest Threat to Your Online Security (You) appeared first on Simple Thread.


How to Mount a Disk in Ubuntu
Category: Linux

If the CIFS utility is asking you for a Username and Password every time you attempt to mount -a ...


Views: 316 Likes: 119
Full Stack Software Developer
Category: Jobs

We have an opening for a Full Stack Software Developer. Please send resumes asap for our team to ...


Views: 0 Likes: 76
What's New: High Paying Jobs and How to stay Produ ...
Category: General

Hello Software Developers, here is the update for this week. This week at Er ...


Views: 0 Likes: 39
Technical Project Manager
Category: Jobs

"IMMEDIATE REQUIREMENT" Please share the suitableprofile to&nbsp;<a href="mailtoelly.jack ...


Views: 0 Likes: 29
Network Security Video
Category: Network

Introduction to Network Security [Video]: be aware and protect your company's data. Vid ...


Views: 242 Likes: 87
Is AI going to take Software Development Jobs?
Category: Research

Artificial Intelligence (AI) is becoming increasingly prevalent in the software development indu ...


Views: 0 Likes: 32
Three Tools to Systemize Your Discoveries

Discoveries are one of the reasons I was excited to become a UX designer. Whether building a new product, rethinking an existing product, or incorporating new features, discoveries are an exciting time of exploration and collaboration to uncover what needs to be built and why. Discoveries also come with challenges, as you often have to explore large amounts of information in a short amount of time and work with high levels of ambiguity. You also need to find ways to organize information, break down complexity, and identify gaps that require further exploration.

Incorporating systematic thinking into the discovery process can help alleviate many of these challenges. When building systems, it’s helpful to incorporate tools that facilitate thinking systematically. Using tools like Obsidian, OOUX, and Notion can help you both stay organized in your research and make it easier for you to find and share information. This post explores each of these tools in more detail.

1. Obsidian for Research

Obsidian is a note-taking application where you’re able to organize and connect related information using links. It’s built around the Zettelkasten method, a German term for a system of organizing and linking notes.

Searchability

In applications like Google Docs, organizing a large amount of information often requires creating separate documents for different concepts or user interviews. This can make it challenging to search for specific terms or create connections between items. With Obsidian, you can take multiple notes within a single document, creating a workspace where you can easily navigate through various interviews or topics. This also allows for global search, which makes synthesizing much easier.

Interconnectedness

When taking notes, I often come across insights that are related to the topic at hand but are significant enough to deserve their own document, or that already have a related document where the note needs to be captured. It can be frustrating to have to stop and write down an insight in a separate place to ensure it isn’t lost. Obsidian addresses this by allowing you to create connections, known as bi-directional links, between different items. This helps you to establish relationships and easily navigate between related information, creating a better understanding of connections and insights.

Visualization

Obsidian also has a visualization tool that allows you to see the connections you’ve made visually. This helps to explore how concepts connect and visually grasp how often terms or concepts are used. By visualizing these connections, you can uncover key insights and make connections that might have been missed otherwise.

2. Object-Oriented UX for Synthesizing and Organizing

When completing research, the amount of information heading in can be overwhelming. It’s helpful to have a framework for organizing the information in a way that allows you to think about the product holistically and uncover gaps that need to be explored. This is where Object-Oriented UX (OOUX) comes into play. It’s a framework for synthesizing and organizing information that is useful for designers, developers, and end users.

Traditionally, we organize what needs to be built around the actions or features that users need to take. However, this approach often leads to a linear way of building and organizing software, where we segment what we need to build into parts before we’ve thought about the whole. Before designing, it’s important we clearly understand the main parts of the system and how they relate, so we can help users understand the relationships throughout the system as well. OOUX focuses on first exploring the objects, or main parts of the system, then defining the relationships between the objects, and then considering the actions that can be taken on each object. This shift in perspective allows for more interconnected thinking, which in turn helps users understand how concepts relate within the product. You can also use this approach to organize the information from research in a structured way, which helps to clarify what needs to be built and uncover gaps earlier in the process. OOUX can be used flexibly, but is typically incorporated right in the middle of the Double-Diamond Process – after research and before wireframing. If you’d like to learn more, check out these OOUX Resources.
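To make the objects-then-relationships-then-actions ordering concrete, here is a purely illustrative Python sketch; the Brewery and Beer objects are hypothetical examples, not part of the OOUX materials:

# OOUX-style ordering: objects first, then relationships, then actions.
from dataclasses import dataclass, field

@dataclass
class Brewery:
    # Object and its attributes.
    name: str
    # Relationship: a Brewery has Beers.
    beers: list["Beer"] = field(default_factory=list)

@dataclass
class Beer:
    name: str
    style: str
    # Relationship: a Beer belongs to a Brewery.
    brewery: "Brewery | None" = None

    # Actions (calls to action) are considered last, per object.
    def rate(self, stars: int) -> None:
        print(f"{self.name} rated {stars}/5")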
3. Notion for Requirements and Product Management

Notion is another useful note-taking tool and is often used to help people manage tasks. You can also create databases where you can use properties, formulas, filters, and create different views of the information.

Documenting Requirements

When clients share more detailed information, it can be difficult to know how to organize everything to make sure it’s not lost in the mix. One especially helpful part of the OOUX process is creating an Object Map, which lists out all of the main pieces of functionality, with their attributes listed out as cards. This can help organize information surrounding requirements, values of attributes, details about the relationships between pieces of functionality, as well as information around calls to action. Notion is a great tool to use to organize all of this information. Not only can you list all of the information and place details inside of cards, but you also have the flexibility to create different views (table, list, kanban, timelines, etc.), and you have full control over the filtering and sorting.

Project Management

Tools like Airtable, ClickUp, and Shortcut allow you to create tables and relationships, but many have constraints in their hierarchies (i.e. Milestones, Epics, and Stories). Constraints can be useful, but if you need more flexibility, Notion allows you to build your own systems to model your product design and development process, and can replace similar tools.

Information Architecture Prototypes

We build prototypes to test flows, layouts, and visual design using tools such as Figma, but it can be challenging to quickly test the system-wide information architecture. Using related databases and building out pages using Notion’s structured UI, you can create information-architecture-based prototypes to test the foundational navigation and relationships. This ensures that you have all of the necessary pages, that one can navigate through the relationships in the system, and helps to determine if you’re missing any key relationships that need to be represented.

The Value of Systematic Thinking

Discoveries become more enjoyable when we are equipped with tools to manage large amounts of information and have frameworks that help us break apart complexity. By embracing systematic thinking, we can stay grounded throughout the process, collaborate more effectively with stakeholders, and bring clarity to the end-users of our products. Hoping these tools prove useful on your upcoming explorations – happy discovering!

The post Three Tools to Systemize Your Discoveries appeared first on Simple Thread.


[Free Databases] Open Source Databases
Category: Databases

This article will talk about free open source databases that can allow scalability. This article ...


Views: 316 Likes: 78
Senior Software Engineer - Product
Category: Jobs

Senior Software Engineer – Product. Do you thrive on ...


Views: 0 Likes: 34
FooBar is FooBad

FooBar is FooBad FooBar is a metasyntactic variable. A “specific word or set of words identified as a placeholder in computer science”, per wikipedia. It’s most abstract stand-in imaginable, the formless platonic ideal of a Programming Thing. It can morph into a variable, method or class with the barest change of capitalization and spacing. Like “widget”, it’s a catch-all generic term that lets you ignore the specifics and focus on the process. And it’s overused. Concrete > Abstract Human brains were built to deal with real things. We can deal with unreal things, but it takes a little bit of brainpower. And when learning a new language or tool, brainpower is in scarce supply. Too often, `FooBar` is used in tutorials when almost anything else would be better. Say I’d like to teach Python inheritance to a new learner. # Inheritance class Foo def baz(self) print("FooBaz!") class Bar(Foo) def baz(self) print("BarBaz!") A novice learner will have no idea what the above code is doing. Is it `Bar` inheriting from `Foo` or vice versa? If it seems obvious to you that’s because you already understand the code! It makes sense because we already know how it works. Classic curse of knowledge. Why force learners to keep track of where Foo comes before Bar instead of focusing on the actual lesson? Compare that to this example using concrete, real-world, non-abstract placeholders # Inheritance class Animal def speak(self) print("") class Dog(Animal) def speak(self) print("Bark!") This is trite and reductive. But it works. It’s immediately clear which way the inheritance runs. Your brain leverages its considerable real-world knowledge to provide context instead of mentally juggling meaningless placeholder words. As a bonus, you effortlessly see that the Cat class is a noun/thing and the speak() method is verb/action. Concrete Is Better for Memory Even if a learner parses your tutorial, will they remember it? The brain remembers concrete words better than abstract ones.  Imagine a cherry pie, hot steaming, with a scoop of ice cream melting down the side. Can you see it?   Now try to imagine a “Foo”… Can you see it? Yeah, me neither. Concrete examples are also more unique. AnimalDog is more salient than FooBar in the same way “John is a baker” is easier to remember than someone’s name is “John Baker”. It’s called the Baker-Baker Effect.  Your brain is full of empty interchangeable labels like Foo, Bar, John Smith. But something with relationships, with dynamics and semantic meaning? That stands out. Concrete Is Extensible Lets add more examples to our tutorial. Sticking to Foo, I suppose I could dig into the Metasyntactic variable wikipedia page and use foobar, foo, bar, baz, qux, quux, corge, grault, garply, waldo, fred, plugh, xyzzy and thud. # Inheritance class Foo def qux(self) print("FooQux!") class Bar(Foo) def qux(self) print("BarQux!") class Baz(Foo) def qux(self) print("BazQux!") But by then, we’ve strayed from ‘beginner demo’ to ‘occult lore’. And the code is harder to understand than before! Using a concrete example on the other hand… # Inheritance class Animal def speak(self) print("") class Dog(Animal) def speak(self) print("Bark!") class Cat(Animal) def speak(self) print("Meow!") Extension is easy and the lesson is reinforced rather than muddied. Exercise for the reader See if you can rewrite these python examples on multiple inheritance in a non-foobar’d way. Better Than Foo Fortunately, there are alternatives out there. The classic intro Animal, or Vehicle and their attending subclasses. 
Or might I suggest using Python’s convention of spam, eggs, and ham? A five-year-old could intuit what eggs = 3 means. There’s also cryptography’s Alice and Bob and co. Not only are they people (concrete), but there’s an ordinal mapping in the alphabetization of their names. As an added bonus, the name/role alliteration aids in recall. (Mallory is a malicious attacker. Trudy is an intruder.)

New Proposal: Pies

Personally, I think Pies make excellent example variables. They’re concrete, have categories (Sweet, Savory), subtypes (Fruit, Berry, Meat, Cream), and edge cases (Pizza Pies, Mud Pies).

# Pies
fruit = ['cherry', 'apple', 'fig', 'jam']
meat = ['pork', 'ham', 'chicken', 'shepherd']
nut = ['pecan', 'walnut']
pizza = ['cheese', 'pepperoni', 'hawaiian']
other = ['mud']

They also come baked-in with a variety of easy-to-grasp methods and attributes like slice(), bake(), bake_time, or price, all of which can be implicitly understood. (A hypothetical Pie class along these lines is sketched just after this post.) Though if pies aren’t your thing, there’s a whole world of concrete things to choose from. Maybe breads? ['bun', 'roll', 'bagel', 'scone', 'muffin', 'pita', 'naan']

Conclusion

I’m not holding my breath for foobar to be abolished. It is short, easy, abstract, and (most importantly) established. Mentally mapping concrete concepts is hard. Analogies are tricky and full of false assumptions. Maps are not the territory. You’re trying to collapse life in all its complexity into something recognizable but not overly reductive or inaccurate. But the solution is not to confuse abstractness for clarity.

For tutorials, extended docs, and beginner audiences, skip foobar. Use concrete concepts instead, preferably something distinct that can be mapped onto the problem space. And if it gives implicit hierarchy, relationships, or noun/verb hinting, so much the better.

Use FooBar when you’re trying to focus on the pure abstract case without extra assumptions cluttering the syntax. Use it in your console, in debuggers, and when you’re talking to experienced programmers. But for anything longer than a brief snippet, avoid it.

The post FooBar is FooBad appeared first on Simple Thread.
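To make the pies proposal concrete, here is a minimal, hypothetical Pie class. The field names, defaults, and behavior are my own assumptions layered on the slice()/bake()/bake_time/price idea from the post above.

# A hypothetical Pie class; fields and defaults are illustrative guesses
class Pie:
    def __init__(self, flavor, price, bake_time=45):
        self.flavor = flavor        # e.g. 'cherry'
        self.price = price          # in dollars
        self.bake_time = bake_time  # in minutes
        self.slices = 8             # a standard pie cuts into 8 slices

    def bake(self):
        print(f"Baking the {self.flavor} pie for {self.bake_time} minutes.")

    def slice(self):
        # Serve one slice, if any remain
        if self.slices > 0:
            self.slices -= 1
            print(f"Served a slice of {self.flavor} pie; {self.slices} left.")

cherry = Pie('cherry', price=12)
cherry.bake()   # Baking the cherry pie for 45 minutes.
cherry.slice()  # Served a slice of cherry pie; 7 left.

Every name here reads at a glance, which is exactly the property the post argues for.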


Describing UX Design, or the User Experience of Beer Labels
Describing UX Design, or the User Experience of Be ...

When I made the jump from my traditional graphic design role to a new position in UX/UI, I often found myself trying to describe the difference between the two. My friends and family would look at me with furrowed brows as I attempted to explain what the new role would entail. I realized in these strained conversations that the term “user experience” was apparently not as prevalent as I had come to believe. And graphic design seemed to be understood exclusively from the final product; the pretty picture, if you will. I found myself saying things like, “more problem solving and functionality; less art,” to the unfamiliar audience. But was that true? Was what I was doing in graphic design all that different?

At Simple Thread, we often describe the UX process as a series of five phases: Research, Define, Prototype, Implement, Operate. It seems to me that these steps are critical to the design of just about anything, and I’d like to explore them through the lens of one of my favorite graphic design projects: the craft beer label.

Imagine we’re tasked with designing a beer label. That’s simple enough, right? We just need something that identifies the contents. Make it pretty. We need it by tomorrow. Is that enough to finalize a design? Maybe. But certainly not a good one. Let’s explore the steps of the design process.

Step One: Research

For the design to be effective, we need to start by understanding the product, the brewery, the distribution plan, the audience, and the competition. We have a lot of questions to ask: What is this beer? Are there any unique identifiers to this particular brew, like different hops or unusual ingredients? How does it differ from other beers at this brewery? How will it be distributed? Who is the audience? Answering these questions is critical to delivering a successful design. Just like in UX, we need to fully understand the problem before we can begin to solve it.

The research phase is also multifaceted. We need to understand the product itself and the brewery that’s producing it, but we also need to consider the goals of the visual style. Looking for inspiration, sometimes on the supermarket shelf, will help us form a vision for things like color palette, typography, and illustration style.

Step Two: Define

Once the product is identified, more definition around the label comes into play. There are technical considerations to take into account, like the size of the vessel (often 12 or 16 ounces), the availability and timeline of the materials, and the intended release date. As we begin to think about designing the label, we need to know if it will be applied like a sticker to the can, shrink-wrapped for full coverage, or printed on the metal itself. More involved production techniques are often only cost-effective at high quantities, and understanding the process is critical to setting up our file correctly for the printer.

There are also legal considerations for packaging; rules that are more stringent for alcohol sales than for much else. Part of defining the design is identifying which components are essential to include. In the beer label world, those elements include the legal warning, the address of the brewing and canning facility, the size of the can, the ABV, and the name and style of beer. With, might I add, some awfully specific rules about text size, placement, and even character count per square inch for some of the more strictly regulated components.

We also need to define our goals for the visual language.
Some breweries have visual systems that help buyers quickly understand what they’re getting. The blue one is a pilsner, the white is an IPA, etc. We need to decide if this label will fit within a pre-existing system, create a new system, or be a unique one-hit wonder.

Step Three: Prototype

Now it’s time to put our elements together and start creating the bones of the label. A low-fidelity mockup for a beer label is a rough placement of the brewery logo, beer name, legal requirements, and illustration footprint to gain an understanding of how the elements will interact with each other. This wireframe, or sketch, gets wrapped around old cans and analyzed for things like type hierarchy and visual appeal.

At this stage, we’re determining which elements should be most pronounced and why. If we’re designing for an established brewery with a solid reputation, there’s a good chance that a potential buyer might select this can based on the logo alone. If the beer itself is seasonal or includes unique ingredients, the name or style of beer might be the most compelling component.

The prototyping phase includes early wireframes and sketches, which will later transition to more completed designs. This can be a tricky step because the design needs to be complete enough to communicate the idea, but not so complete as to sink precious billable hours into detailed design work that may or may not make the cut.

Step Four: Implement

After the trial and error of quick prototyping, it’s time to design the final label. This phase is what most people think of when they envision designing something; it’s the part when final colors, type, and layout come together to create something that didn’t exist before. It’s when the magic happens. But really, it’s the last step. The implementation of the design requires knowledge from the research, define, and prototype phases to create an effective final product.

Step Five: Operate

The last phase of the UX process is to set the product free, let it operate, and measure the results. For our label, success may be measured by sales numbers in the taproom and in distribution, or by whether it was completed in time for canning day.

Conclusion

In UX, these five steps are a very iterative process. Results are measured and changes are implemented as new needs develop. The steps are not always adhered to in a perfectly linear fashion. On the flip side, once the beer label exists in the world, there it is. We can iterate on the next round of canning and make different design choices the second time around, but we don’t have the ability to update this version in real time.

The design of a craft beer label sounds simple, but the design process is closely aligned with that of the UX/UI workflow. I am still learning the myriad ways they differ: the unique programs and processes, the more in-depth research, the iterative nature, and the focus on the user’s needs. But as far as I can tell, all good design should follow these steps. Form should always follow function, and it’s never been just about pretty pictures.

The post Describing UX Design, or the User Experience of Beer Labels appeared first on Simple Thread.


ASP.NET 8 Best Practices: Coding, Performance Tips ...
Category: .Net 7

In this chapter, we will explore various best practices and performance tips to enhance your ASP. ...


Views: 368 Likes: 98



