Katrina Paulson

3 years ago

Dehumanization Against Anthropomorphization

We've fought for humanity's sake. We need equilibrium.

Photo by Bekah Russom on Unsplash

We live in a world of opposites (black/white, up/down, love/hate), thus life is a game of achieving equilibrium. We have a universe of paradoxes within ourselves, not just in physics.

Individually, each of us balances intellect and heart, but as a species we're full of polarities. People can be gentle and compassionate one moment, then ruthless and unsympathetic the next.

We crave connection so much that we personify non-human beings and objects, while at the same time turning to violence and hatred toward other people. These contrasts baffle me. Will we ever find balance?

Anthropomorphization

Assigning human-like features to objects, or bonding with them, is common throughout childhood. Cartoons routinely give non-humans human traits. Adults never fully outgrow the habit: researchers agree we start anthropomorphizing as infants and continue throughout life.

Humans of all ages are good at humanizing things. We build emotional attachments to weather events, inanimate objects, animals, plants, and places. Gods, goddesses, and fictional characters are anthropomorphized too.

Cast Away, starring Tom Hanks, is a famous example of anthropomorphization. Hanks's character is stranded on an island, where he builds an emotional bond with a volleyball he names Wilson.

Audiences, myself included, became emotionally invested in Wilson.

Why do we do it, though?

Our instincts and traits helped us survive and thrive. Our brain stays alert to other people's thoughts, feelings, and intentions to help us determine who is safe and who is dangerous. We can reason about other people's mental states and our own, and even think about thinking itself. This is the Theory of Mind.

Neurologically, specialists believe the Theory of Mind is connected to our mirror neurons, which show the same activity whether we perform an action or watch someone else perform it.

Mirror neurons may contribute to anthropomorphization, but they're not the only players. In 2021, Harvard Medical School researchers at Massachusetts General Hospital, together with MIT colleagues, published a study on how individual neurons support the brain's theory of mind.

“Our study provides evidence to support theory of mind by individual neurons. Until now, it wasn’t clear whether or how neurons were able to perform these social cognitive computations.”

The researchers found that individual neurons have specialized roles. Some encode information that distinguishes one person's beliefs from another's; some represent elements of the story being considered; others aren't directly involved in social reasoning at all but may contribute by multitasking.

Combined, the activity of these neurons paints a precise picture of another person's beliefs and understanding. The theory of mind describes how we judge and understand each other within our species, and it likely paved the way for anthropomorphism. Neuroscience shows that the same brain regions, including mirror neurons, respond to human and non-human behavior alike.

Some academics believe we're wired for connection, which explains why we anthropomorphize. When we're alone, we may anthropomorphize non-humans.

Another theory holds that humanizing non-human entities makes them worthy of moral care in our eyes. Anthropomorphizing something makes it responsible for its actions and deserving of punishment or reward. This mental shift is easiest to see in our relationships with pets, and it brings us to the opposite tendency: dehumanization.

Dehumanization

Dehumanization is the opposite of anthropomorphization: denying a person, or anything else, moral regard.

Dehumanization occurs throughout history. We do it to everything in nature, including ourselves. We experiment on and torture animals. We enslave, hate, and harm other groups of people.

Race, immigrant status, dress choices, sexual orientation, social class, religion, gender, politics, need I go on? Our degrading behavior is promoting fascism and division everywhere.

Dehumanizing someone or something strips away their agency and value. Many people assume they're immune to this tendency, but experiments suggest otherwise.

It's inevitable. Humans are wired to have knee-jerk reactions to differences. We are programmed to dehumanize others, and it's easier than we'd like to admit.

Why do we do it, though?

Dehumanizing others comes more easily than humanizing things, for several reasons. First, we treat anything unfamiliar as potentially harmful, an instinct that has helped our species survive for hundreds of thousands of years. Our tendency to distrust others, like our fear of the unknown, feeds an us-versus-them mentality.

Since World War II, numerous studies have tried to explain how and why the Holocaust happened. How were so many individuals radicalized into committing such awful acts while feeling morally justified? Researchers quickly showed how easily the mind can turn dark.

Stanley Milgram's 1960s electroshock experiment highlighted how quickly people bow to authority to injure others. Philip Zimbardo's 1971 Stanford Prison Experiment revealed how power may be abused.

The us-versus-them attitude is natural and even young toddlers act on it. Without a relationship, empathy is more difficult.

It's terrifying how quickly dehumanizing behavior becomes commonplace. The current pandemic is an example. Most countries no longer count deaths. Long Covid is a major issue, with predictions of a handicapped tsunami in the future years. Mostly, we shrug.

In 2020, we panicked. Remember everyone's caution? Now Long Covid is ruining more lives, threatening to disable an insane amount of our population for months or their entire lives.

There's little research. Experts can't even agree on how to classify it, let alone cure it. People should be outraged, but most have stopped caring. They're over Covid.

We're encouraged to find a way to live with a terrible pandemic that will cause years of damage. People aren't worried about infection anymore. They shrug and say, "We'll all get it eventually," then hope they're not among the 30% who develop Long Covid.

We can correct course before further damage. Because we can recognize our urges and biases, we're not captives to them. We can think critically about our thoughts and behaviors, then attempt to improve. We can recognize our deficiencies and work to attain balance.

Changing perspectives

We're currently attempting to find equilibrium between opposites. It's superficial to defend extremes by stating we're only human or wired this way because both imply we have no control.

Being human involves having self-awareness, and by being careful of our thoughts and acts, we can find balance and recognize opposites' purpose.

Extreme anthropomorphizing and dehumanizing isolate and imperil us. We anthropomorphize because we desire connection and dehumanize because we're terrified, frequently of the connection we crave. Will we find balance?

Katrina Paulson ponders humanity, unanswered questions, and discoveries. Please check out her newsletters, Curious Adventure and Curious Life.

Will Lockett

2 years ago

Thanks to a recent development, solar energy may prove to be the best energy source.

Photo by Zbynek Burival on Unsplash

Perovskite solar cells will revolutionize everything.

Humanity is in a climate Armageddon. Our widespread ecological crimes of the previous century are catching up with us, and planet-scale karma threatens everyone. We must adopt new technologies and lifestyles to avoid this fate. Even solar power, a renewable energy source, has climate problems. A recent discovery could boost solar power's eco-friendliness and affordability. Perovskite solar cells are amazing.

Perovskite is a silicon-like semiconductor. Semiconductors are used to make computer chips, LEDs, camera sensors, and solar cells. Silicon makes sturdy, long-lasting solar cells, which is why it's used in most modern panels.

Perovskite solar cells are far better in several respects. First, they're easy to make at room temperature, unlike silicon cells, which require long, intricate baking processes. That makes perovskite cells cheaper to produce and shrinks their carbon footprint. They're also more efficient. Most silicon-panel solar farms are about 18% efficient, meaning 18% of the incoming solar energy is converted into electricity. Perovskite cells reach about 25%, and since 25 ÷ 18 ≈ 1.39, that works out to the roughly 38% improvement over silicon.

However, perovskite cells are nowhere near as durable. A typical silicon panel holds up for around 20 years before losing much efficiency. The first perovskite cells, by contrast, were impractical because they lasted barely minutes.

Recent research from Princeton shows that perovskite cells can now endure around 30 years. The cells kept their efficiency, so nothing was sacrificed for that durability.

I'm no electrical or chemical engineer, so I can't explain how they did it. But strangely, the team says the longevity itself isn't the big deal. Perovskite panels will keep getting longer-lasting over the next few years; the real question is how you test a panel's lifetime when you only have a month or two. The field needed a uniform method for quickly estimating perovskite life expectancy, and establishing that standard procedure was the study's key milestone.

Their solution is lab-based accelerated aging tests. Perovskite cells degrade faster at higher temperatures, so scientists can extrapolate from that. The test heated the panel to 110 degrees Celsius and waited for its output to drop by 20%. Their panel lasted 2,100 hours (87.5 days) before that 20% decline.

They then did the math to extrapolate from this data and work out how long the panel would have lasted in different climates, and were shocked to find it would last about 30 years in Princeton's. That makes perovskite panels as durable as silicon panels. In theory, this panel could be sold today.
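
For readers curious how that kind of extrapolation works in principle, here is a minimal sketch of an Arrhenius-style accelerated-aging calculation. It is a generic illustration, not the Princeton team's actual model, and the activation energy and temperatures below are assumed example values.

const BOLTZMANN_EV = 8.617e-5; // Boltzmann constant in eV per kelvin

function accelerationFactor(activationEnergyEv, stressTempC, useTempC) {
  const stressK = stressTempC + 273.15;
  const useK = useTempC + 273.15;
  // Hotter stress testing ages the cell faster, so the factor is greater than 1.
  return Math.exp((activationEnergyEv / BOLTZMANN_EV) * (1 / useK - 1 / stressK));
}

// Assumed numbers: 2,100 hours survived at 110 °C, extrapolated to a 25 °C climate,
// with an assumed activation energy of 0.6 eV.
const hoursAtStress = 2100;
const factor = accelerationFactor(0.6, 110, 25);
const estimatedFieldYears = (hoursAtStress * factor) / 8760;
console.log(`Estimated field lifetime: about ${estimatedFieldYears.toFixed(0)} years`);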

Developments like this will soon let these brilliant panels be released into the wild; the technology could be commercially viable in ten years, maybe five.

Once it is, solar power will be the best option. Solar is already cheap and low-carbon; switching to perovskite would make it the cheapest renewable energy source, and the carbon footprint of panel manufacturing would drop as well.

Perovskites' impact goes beyond cost and carbon. Silicon panels require harmful mining and contain toxic elements (cadmium). Perovskite panels don't require intense mining or horrible materials, making their production and expiration more eco-friendly.

Solar power destroys habitat. Massive solar farms could reduce biodiversity and disrupt local ecology by destroying vital habitats. Perovskite cells are more efficient, so they can shrink a solar farm while maintaining energy output. This reduces land requirements, making perovskite solar power cheaper, and could reduce solar's environmental impact.

Perovskite solar power is scalable and environmentally friendly. Princeton scientists will speed up the development and rollout of this energy.

Why bother with fusion, fast reactors, SMRs, or traditional nuclear power? We're close to developing a nearly perfect environmentally friendly power source, and we have the tools and systems to do so quickly. It's also affordable, so we can adopt it quickly and let the developing world use it to grow. Even I struggle to justify spending billions on fusion when a great, cheap technology outperforms it. Perovskite's eco-credentials and cost advantages could save the world and power humanity's future.

Michael Hunter, MD

2 years ago

5 Drugs That May Increase Your Risk of Dementia

Photo by danilo.alvesd on Unsplash

While our genes can't be changed easily, you can avoid some dementia risk factors. Today we discuss dementia and five drugs that may increase risk.

Memory loss seems to come with age, but we're not talking about ordinary forgetfulness; occasionally misplacing your car keys isn't a sign of dementia. Dementia impairs the ability to think, remember, or make judgments, and it interferes with daily tasks.

Alzheimer's is the most common form of dementia. Unlike ordinary forgetfulness, dementia is not normal aging, although aging increases the risk of Alzheimer's and other dementias. A family history of the illness also increases your risk, according to the Mayo Clinic (USA).

Given that our genes are difficult to change (I won't get into epigenetics), what are some avoidable dementia risk factors? Certain drugs may cause cognitive deterioration.

Today we look at five drug classes that may contribute to cognitive decline.

1. Dementia and benzodiazepines

Benzodiazepines are sedatives that increase GABA levels in the brain. Examples include:

  • Diazepam (Valium)

  • Alprazolam (Xanax)

  • Clonazepam (Klonopin)

Addiction and overdose are well-known benzodiazepine risks. But dementia? Surprisingly, these medications don't appear to raise dementia risk.

USC study: Benzodiazepines don't increase dementia risk in older adults.

Benzodiazepines can still cause short- and long-term memory problems: they hinder the formation of new memories, and extreme cases can permanently impair learning and memory. Fortunately, that kind of anterograde amnesia is uncommon.

2. Statins and dementia

Statins reduce cholesterol by blocking a substance the body needs to make it. Examples:

  • Atorvastatin (Lipitor)

  • Fluvastatin (Lescol XL)

  • Lovastatin (Altoprev)

  • Pitavastatin (Livalo, Zypitamag)

  • Pravastatin (Pravachol)

  • Rosuvastatin (Crestor, Ezallor)

  • Simvastatin (Zocor)

Photo by Towfiqu barbhuiya on Unsplash

This finding is contentious. Dr. JoAnn Manson of Harvard's Brigham and Women's Hospital says:

“I think that the relationship between statins and cognitive function remains controversial. There’s still not a clear conclusion whether they help to prevent dementia or Alzheimer’s disease, have neutral effects, or increase risk.”

This one's off the dementia list.

3. Dementia and anticholinergic drugs

Anticholinergic drugs treat many conditions, including urinary incontinence. They block acetylcholine, a brain chemical that helps carry messages between cells, which is why they can cause drowsiness, disorientation, and memory loss.

First-generation antihistamines, tricyclic antidepressants, and overactive bladder antimuscarinics are common anticholinergics among the elderly.

Anticholinergic drugs may contribute to dementia. One study found that taking anticholinergics for three years or more raised the risk of dementia 1.54-fold compared with taking them for three months or less. The risk may persist even after stopping the medication.

4. Drugs for Parkinson's disease and dementia

Cleveland Clinic (USA) on Parkinson's:

Parkinson's disease causes age-related brain degeneration. It causes delayed movements, tremors, and balance issues. Some are inherited, but most are unknown. There are various treatment options, but no cure.

Parkinson's medications can cause memory loss, confusion, delusions, and obsessive behaviors. The drug's effects on dopamine cause these issues.

A 2019 JAMA Internal Medicine study found that powerful anticholinergic medications increase dementia risk.

Those who took anticholinergics had a 1.5 times higher chance of dementia. Individuals taking antidepressants, antipsychotic drugs, anti-Parkinson’s drugs, overactive bladder drugs, and anti-epileptic drugs had the greatest risk of dementia.

Anticholinergic medicines can lessen Parkinson's-related tremors, but they can also slow thinking, and in people over 70 they can cause disorientation and hallucinations.

Photo by Wengang Zhai on Unsplash

5. Antiepileptic drugs and dementia

The dementia risk associated with anti-seizure drugs varies from drug to drug. Levetiracetam (Keppra), for example, has been reported to improve cognition in people with Alzheimer's.

One study linked several anti-seizure medications to dementia: anti-epileptic drugs raised the risk of Alzheimer's disease 1.15-fold in the Finnish sample and 1.3-fold in the German population. Depakote and Topamax are examples of such drugs.

Bradley Vangelder

2 years ago

How we started and then quickly sold our startup

From a simple landing page where we tested our MVP to a platform that distributes 20,000 codes per month, we learned a lot.

Starting point

Kwotet was my first startup: a place where anyone could post book quotes online.

I wanted a change.

Kwotet wasn't getting attention, so I felt stuck. After the trials of starting Kwotet, I thought about building a waitlist service instead, but I needed a strong co-founder.

I knew Dries from school, but we weren't close. He was an entrepreneurial programmer who did a lot of work outside school, which was exactly what I needed.

We brainstormed during school hours and built features we hoped would put us ahead. We worked until 3 am to launch the product.

Putting in the hours is KEY when building a startup

The instant that we lost our spark

In Belgium, college seniors do their internship in their last semester.

Since we both decided to pick quite challenging companies, little time was left for Lancero.

Eventually, we lost interest. We lost the spark…

The only logical choice was to find someone with the same spark we started with to acquire Lancero.

And we did, on MicroAcquire.

Sell before your product dies. Make sure to profit from all the gains.

What did we do following the sale?

Not long after selling Lancero, I lost my dad. I was about to start a new company focused on positivity, and I had none left in me at the time.

We still haven't let go of the dream of becoming full-time entrepreneurs. Dries has since launched the amazing company Plunk, and I'm still in the discovery stage of my next journey!

Dream!

You’re an entrepreneur if:

  • You're imaginative.

  • You enjoy disassembling and reassembling things.

  • You're adept at making new friends.

  • YOU HAVE DREAMS.

You don’t need to believe me if I tell you “everything is possible”… I wouldn't believe it myself if anyone told me this 2 years ago.

Until I started doing, living my dreams.

Jeff Scallop

2 years ago

The Age of Decentralized Capitalism and DeFi

DeCap is DeFi's killer app.

The Battle of the Moneybags and the Strongboxes (Pieter Bruegel the Elder and Pieter van der Heyden)

“Software is eating the world.” Marc Andreessen, venture capitalist

DeFi. Imagine a blockchain-based alternative financial system that offers the same products and services as traditional finance, but with more variety, faster, more secure, lower cost, and simpler access.

Decentralised finance (DeFi) is a marketplace without gatekeepers or central authority managing the flow of money, where customers engage directly with smart contracts running on a blockchain.

DeFi grew exponentially in 2020/21, with Total Value Locked (an imperfect estimate of market size) topping $100 billion. After that, it crashed.

The savings accumulated by people with high discretionary income during the pandemic, the novelty of crypto trading, and the high yields on offer (5% APY for stablecoins on established platforms, up to 100%+ for risky assets) are among the main factors behind that exponential growth.

No longer your older brother's DeFi

Since transactions are anonymous, DeFi 1.0 required borrowers to overcollateralize: to borrow $100 in stablecoins, you must deposit, say, $150 in ETH (see the sketch after the two questions below). This business model raises two problems:

  • Why does DeFi offer interest rates that are higher than those of the conventional financial system?

  • Why would somebody put down more cash than they intended to borrow?
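
For readers unfamiliar with the mechanic, here is a tiny sketch of the overcollateralization check described above. The numbers and the 150% threshold are illustrative only, not any particular protocol's rules.

const MIN_COLLATERAL_RATIO = 1.5; // e.g. deposit $150 of ETH to borrow $100 of stablecoins

function canBorrow(collateralValueUsd, loanValueUsd) {
  return collateralValueUsd / loanValueUsd >= MIN_COLLATERAL_RATIO;
}

console.log(canBorrow(150, 100)); // true: exactly at the 150% threshold
console.log(canBorrow(120, 100)); // false: undercollateralized, so the loan is refused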

Having maxed out their own resources, investors took loans to acquire more crypto; the demand for those loans raised DeFi yields, which kept crypto prices rising; and as crypto prices rose, investors made a return on their positions, allowing them to deposit more money and borrow even more crypto.

This only works in a bull market. DeFi 1.0's speculation-by-overcollateralization model is dead; the crypto crash sank it.

The “speculation by overcollateralisation” world of DeFi 1.0 is dead

At a JP Morgan digital assets conference, institutional investors were more interested in DeFi than crypto or fintech. To me, that shows DeFi 2.0's institutional future.

DeFi 2.0 protocols must handle KYC/AML, tax compliance, market abuse, and cybersecurity problems to be institutional-ready.

Stablecoins gaining market share under benign regulation and more CBDCs coming online in the next couple of years could help DeFi 2.0 separate from crypto volatility.

DeFi 2.0 will have a better footing to finally decouple from crypto volatility

Then we can transition from speculation through overcollateralization to DeFi's genuine comparative advantages: cheaper transaction costs, near-instant settlement, more efficient price discovery, faster time-to-market for financial innovation, and a superior audit trail.

Akin to Amazon for financial goods

Amazon decimated brick-and-mortar shops by offering millions of things online, warehouses by keeping just-in-time inventory, and back-offices by automating invoicing and payments. Software devoured retail. DeFi will eat banking with software.

DeFi is the Amazon of financial products that will displace fintech. Even the most advanced online brokers offer only around 100 currency pairs and a limited range of bonds, equities, and ETFs.

Old banks are held back by legacy settlement systems and inefficient, hard-to-upgrade software. For the most sophisticated players, it's like driving an F1 car on a dirt road.

It is like driving a F1 car on a dirt road, for the most sophisticated players

Central bankers throughout the world know how expensive and difficult it is to handle cross-border payments using the US dollar as the reserve currency, which is vulnerable to the economic cycle and geopolitical tensions.

Decentralization is the only way to deliver 24-hour global financial markets. DeFi 2.0 would let you buy and sell startup shares as easily as Google or Tesla stock. VC funds could trade like mutual funds. You could create bundled coverage for your car, house, and NFTs. DeFi 2.0 is how software eats banking and delivers a global Wall Street.

Defi 2.0 is how software eats banking and delivers the global Wall Street

Decentralized Capitalism is Emerging

90% of financial markets are already digital. The remaining 10% is the hardest to digitalize: money creation, identity, and asset tokenization.

90% of financial markets are already digital. The only problem is that the 10% left is the hardest to digitalize

Debt helped Athens build the powerful navy that secured its trade routes. Bonds financed the Renaissance's wars and supply chains. Equity fueled industrial growth. FX powered globalization's payment system. What will DeFi power?

If the 20th century was a conflict between governments and markets over economic drivers, the 21st century will be between centralized and decentralized corporate structures.

Offices vs. telecommuting. China vs. onshoring/friendshoring. Oil and gas vs. a diverse energy matrix. National vs. multilateral policymaking. DAOs vs. corporations. Fiat vs. crypto. TradFi vs. DeFi.

An age where the network effects of the sharing economy will overtake the gains of scale of the monopolistic competition economy

This is the dawn of Decentralized Capitalism (or DeCap), an age where the network effects of the sharing economy will reach a tipping point and surpass the scale gains of the monopolistic competition economy, further eliminating inefficiencies and creating a more robust economy through better data and automation. DeFi 2.0 enables this.

DeFi needs to pay the piper now.

DeCap won't be Web 3.0's Shangri-La, though. That's too much to ask of an ailing Atlas. When push comes to shove, DeFi folks want to survive and fight another day for the revolution, and, if feasible, make a tidy profit.

Decentralization wasn't meant to circumvent regulation; it was meant to circumvent censorship. On-ramp and off-ramp measures (controlling DeFi's entry and exit points, not what happens in between) sound like a reasonable compromise for DeFi 2.0.

The sooner authorities realize that DeFi regulation is done ex-ante, by writing code and building smart contracts with the rules baked in, the faster DeFi 2.0 will become the more efficient and safer financial marketplace.

More crucially, system liquidity must improve. DeFi's financial stability risks are downplayed. Just as banks rely on capital requirements, DeFi must improve its liquidity management if it's to go mainstream.

This exposes the complex and, frankly, inadequate governance arrangements of DeFi protocols. They shift control from tokenholders to developers, which is bad governance regardless of the economic model.

But crypto can only ride the existing banking system for so long before forming its own economy. Until then, DeFi will keep upgrading Web 2.0's financial rails.

Samer Buna

2 years ago

The Errors I Committed As a Novice Programmer

Learn to identify them, make habits to avoid them

First, a clarification: this article aims to make new programmers aware of these mistakes, teach them to detect them, and remind them to avoid them.

I made all of these blunders myself and learned from them. I'm glad I now have coding habits that help me avoid them. You should build them too.

These mistakes are listed in no particular order.

1) Writing code haphazardly

Writing good content is hard. It takes planning and research. Writing quality programs is no different.

Think. Research. Plan. Write. Validate. Modify. Unfortunately, there's no good acronym for that. Make a habit of doing the right amount of each of these activities.

As a newbie programmer, my biggest error was writing code without thinking or researching. This works for small stand-alone apps but hurts larger ones.

Just as you should think before saying anything you might regret, you should think before coding something you could regret. Code is an expression of your thoughts.

When angry, count to 10 before you speak. If very angry, a hundred. — Thomas Jefferson.

My quote:

When reviewing code, count to 10 before you refactor a line. If the code does not have tests, a hundred. — Samer Buna

Programming is primarily about reviewing prior code, investigating what is needed and how it fits into the current system, and developing small, testable features. Only 10% of the process involves writing code.

Programming is not just writing code. Programming is a craft that needs nurturing.

2) Making excessive plans prior to writing code

Yes, planning before writing code is good, but too much of it is bad. Even water becomes poison in excess.

Don't look for the perfect plan; it doesn't exist in programming. Look for a good-enough plan to start with. Your plan will change, but it will have forced you to structure your code for clarity. Overplanning is simply a waste of time.

Plan only the next few small features. Planning every feature up front should be illegal! The Waterfall approach, a step-by-step sequential system, requires that kind of extensive planning, and it's not what I mean by planning here. Most software projects fail with waterfall. Implementing anything sophisticated requires agile adaptation to reality.

Programming has to be responsive. You'll add features a waterfall plan could never anticipate. You'll remove features for reasons you never considered in a waterfall plan. You'll fix bugs and adjust. You need to be agile.

Do plan your upcoming features, though, and do it carefully, because too little or too much planning can both hurt code quality, and code quality is not something you can afford to risk.

3) Underestimating the Value of Good Code

Readability should be a primary goal of your code. Unintelligible code belongs in the trash; it can't even be recycled.

Never undervalue code quality. Code is how you communicate implementations, and you must communicate the implementation of your solutions clearly.

Programming quote I like:

Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live. — John Woods

John, great advice!

Small things matter. If your indentation and capitalization are inconsistent, you should lose your coding license.

Long lines are another simple issue. Readability drops after about 80 characters. You might be tempted to put a long condition on the same line to keep an if-statement block together. No. Just never exceed the 80-character limit.

Linting and formatting tools fix many basic issues like this. ESLint and Prettier work great together in JavaScript. Use them.
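
If you want something concrete to start from, here is a minimal sketch of an .eslintrc.js. It assumes ESLint and the eslint-config-prettier package are installed, and the specific rules are only examples:

// .eslintrc.js
module.exports = {
  // 'prettier' (from eslint-config-prettier) turns off rules that conflict with Prettier
  extends: ['eslint:recommended', 'prettier'],
  env: { es2021: true, node: true },
  parserOptions: { ecmaVersion: 2021 },
  rules: {
    'max-len': ['error', { code: 80 }], // enforce the 80-character limit mentioned above
  },
};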

Code quality errors:

Too many lines in a function or a file. Break long code into manageable pieces. My rule of thumb is that any function longer than 10 lines is too long.

Double-negatives. Don't.

Using double negatives is just very not not wrong

Short, generic, or type-based variable names. Name variables clearly.

There are only two hard things in Computer Science: cache invalidation and naming things. — Phil Karlton

Hard-coding primitive strings and numbers without explanation. If your logic depends on a constant primitive string or numeric value, give it a name.

Dodging small problems with sloppy shortcuts and workarounds. Don't dodge them; face them head-on.

Thinking longer code is better. Shorter code is usually preferable. Write the longer version only if it improves readability; for instance, don't use clever one-liners and nested ternary statements just to make the code shorter. In any application, removing unneeded code is an improvement.

Measuring programming progress by lines of code is like measuring aircraft building progress by weight. — Bill Gates

Overusing conditional logic. Most of the time, you need less conditional logic than you think. Choose the form that reads best, and measure performance before optimizing. Avoid Yoda conditions and assignments inside conditions.
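
Here is a quick sketch of two of these issues, using made-up names:

// Magic number: the reader has no idea what 86400 means or why it's here.
const isExpired = (ageInSeconds) => ageInSeconds > 86400;

// Named constant: the intent becomes part of the code.
const SECONDS_PER_DAY = 60 * 60 * 24;
const isOlderThanOneDay = (ageInSeconds) => ageInSeconds > SECONDS_PER_DAY;

// Yoda condition: putting the constant first reads backwards.
// if (86400 < ageInSeconds) { ... }
// Natural order reads the way you would say it:
// if (ageInSeconds > SECONDS_PER_DAY) { ... }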

4) Selecting the First Approach

When I started programming, I would come up with a solution to a problem and immediately run with it. I would apply my first idea without thinking through its intricacies and probable shortcomings.

The best solutions usually emerge only after you start questioning all the ones you've found. If you can't think of several solutions, that's a sign you don't fully understand the problem.

A professional programmer's job is not to find a solution to the problem; it's to find the simplest solution to the problem. By that I mean one that works correctly and is still easy to read, understand, and maintain.

There are two ways of constructing a software design. One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. — C.A.R. Hoare

5) Not Giving Up

I used to stick with my first solution even when I knew it might not be the best one. The never-give-up mentality probably explains this. That mindset serves you well in most of life, but not in programming. With programs, you should fail early and often.

The moment you start doubting a solution, throw it away and rethink the problem, no matter how much effort you've already put into it. Git lets you branch off and experiment with different solutions; use it.

Do not be attached to code because of how much effort you put into it. Bad code needs to be discarded.

6) Avoiding Google

I've wasted time solving problems when I should have researched them first.

Unless you're employing cutting-edge technology, someone else has probably solved your problem. Google It First.

Googling may also reveal that what you think is a problem isn't really one, and that rather than fixing it you should embrace it. Don't assume you already know everything you need in order to pick a solution. Google will surprise you.

But Google carefully. Another newbie habit is copying code without understanding it. Use only code you understand, even if it happens to solve your problem.

Never assume you know how to code creatively.

The most dangerous thought that you can have as a creative person is to think that you know what you’re doing. — Bret Victor

7) Failing to Use Encapsulation

This is not about the object-oriented paradigm. Encapsulation is useful everywhere. Systems without encapsulation are difficult to maintain.

In an application, each feature should be handled in one place, by one object, and the application's other objects should see only what they truly need. This isn't about secrecy; it's about reducing dependencies between the application's parts. Following these guidelines lets you safely change the internals of classes, objects, and functions without breaking things.

Group related logic and state into conceptual units. I'm using "class" loosely here to mean a blueprint template: the unit might be a class, a function object, a module, or a package.

Within a unit of logic, every self-contained task deserves its own method. Methods should do one thing and do it well, and similar classes should use similar method names.

As a rookie programmer, I didn't always create a new class for a conceptual unit, or even recognize the self-contained units in my code. A Util class full of unrelated code is a classic symptom of newbie code. Another is when a small change cascades into numerous other adjustments elsewhere.

Think before adding a method, or before giving an existing method new responsibilities. This takes time, but don't skip it; it's cheaper than refactoring later. Start right.

High Cohesion and Low Coupling involves grouping relevant code in a class and reducing class dependencies.
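
Here is a minimal sketch of the idea, with illustrative names of my own. The cart's internal list is private, so other code can only go through its small public surface, and the internals can change without breaking anything:

class ShoppingCart {
  #items = []; // private: nothing outside this class can reach in and mutate it

  addItem(item) {
    this.#items.push(item);
  }

  get total() {
    return this.#items.reduce((sum, item) => sum + item.price, 0);
  }
}

const cart = new ShoppingCart();
cart.addItem({ name: 'book', price: 12 });
console.log(cart.total); // 12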

8) Arranging for Uncertainty

It's tempting to think beyond the solution in front of you. Every line of code will spark what-ifs. That's excellent for uncovering edge cases, but terrible when it turns into coding for imagined future needs.

Sort your what-ifs into those two buckets: edge cases of today's solution, which you handle, and possible future needs, which you don't. Write only the code you need today, and don't code for the future.

Writing a feature today because you think you'll need it in the future is wrong. Don't.

Write only the code you need today for your solution. Handle edge-cases, but don't introduce edge-features.

Growth for the sake of growth is the ideology of the cancer cell. — Edward Abbey

9) Making the incorrect data structure choices

Beginner programmers often overemphasize algorithms when preparing for interviews. Good algorithms should be identified and used when needed, but memorizing them won't make you a programming genius.

However, learning your language's data structures' strengths and shortcomings will make you a better developer.

Using the wrong data structure is what really shouts "newbie code" here.

Without turning this into a lecture on data structures, let me give you a few examples:

Managing records with arrays instead of maps (objects).

The most common data structure mistake is probably using lists instead of maps to manage records. To manage a list of records, use a map.

By "list of records" I mean a list where each entry has an identifier you use to look it up. Lists are fine, and often better, for scalar values, especially when the main operation is pushing new values onto the list.

Arrays and objects are the most common list and map structures in JavaScript, respectively (modern JavaScript also has a native Map).

Using lists instead of maps to manage records usually fails you. Although the difference really matters only for huge collections, I recommend always following this rule, because maps are much faster than lists at looking up records by identifier.
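
A small sketch with made-up data shows the difference. Looking a record up in an array means scanning it, while a map (or an object keyed by identifier) goes straight to the record:

const usersList = [
  { id: 'a1', name: 'Ada' },
  { id: 'b2', name: 'Grace' },
];
// Array: every lookup is a linear scan.
const foundSlowly = usersList.find((user) => user.id === 'b2');

// Map keyed by id: lookups are direct.
const usersById = new Map(usersList.map((user) => [user.id, user]));
const foundQuickly = usersById.get('b2');

console.log(foundSlowly.name, foundQuickly.name); // Grace Grace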

Stackless

When a problem is recursive in nature, writing a simple recursive function is tempting. But in single-threaded environments, recursive code can be hard to optimize.

Whether recursive code can be optimized depends on what the function returns. A recursive function that returns two or more calls to itself is much harder to optimize than one that returns a single call.

What beginners overlook is the alternative to recursive functions: use a stack. Push the calls you would have made onto a stack, then pop them off to traverse the work.
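
Here is a minimal sketch of that swap, using a hypothetical tree of nodes with a value and a children array. Both versions compute the same sum, but the second one never builds a deep chain of function calls:

// Recursive version: one nested call per node, limited by call-stack depth.
const sumTreeRecursive = (node) =>
  node.value + node.children.reduce((sum, child) => sum + sumTreeRecursive(child), 0);

// Stack-based version: same traversal, managed with an explicit array.
const sumTreeWithStack = (root) => {
  const stack = [root];
  let sum = 0;
  while (stack.length > 0) {
    const node = stack.pop();
    sum += node.value;
    stack.push(...node.children);
  }
  return sum;
};

const tree = { value: 1, children: [{ value: 2, children: [] }, { value: 3, children: [] }] };
console.log(sumTreeRecursive(tree), sumTreeWithStack(tree)); // 6 6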

10) Worsening the current code

Imagine a messy room, and imagine you're asked to add an item to it. Since everything is already a mess, you might be tempted to drop the item just anywhere; you could be done in seconds.

Don't do that with messy code. Don't make it worse! Always leave the code a little cleaner than you found it.

With the room, the right move is to clean up enough to put the new item where it belongs; if it's a piece of clothing, clear a path to the closet. That's the proper way to do it.

The following bad habits frequently make code worse:

  • Code duplication. If you copy/paste a block of code and then change just a line in the copy, you're merely duplicating code and creating more chaos. In the messy-room analogy, that's like adding another chair with a lower base instead of buying one chair with a height-adjustable seat. Always keep abstraction in mind, and use it when appropriate.

  • Not using configuration files. If a value may differ in certain circumstances or at certain times, it belongs in a configuration file. So does any value you need to use in numerous places in the code. Every time you introduce a new value, ask yourself: "Does this value belong in a configuration file?" The answer is most likely "yes."

  • Adding needless conditional statements and temporary variables. Every if-statement is a logic branch that should be tested at least twice, so when avoiding a conditional doesn't hurt readability, avoid it. The main problem here is using branch logic to extend an existing function rather than writing a new one. Every time you feel you need an if-statement or a new function variable, ask yourself: am I changing the code at the right level, or should I step back and think about the problem at a higher level?

This code illustrates superfluous if-statements:

function isOdd(number) {
  if (number % 2 === 1) {
    return true;
  } else {
    return false;
  }
}

Can you spot the biggest issue with the isOdd function above?

The unnecessary if-statement, of course. Equivalent code:

function isOdd(number) {
  return (number % 2 === 1);
};

11) Making remarks on things that are obvious

I've learned to avoid comments where I can. Most code comments can be replaced by better names for the things they describe.

instead of:

// This function sums only odd numbers in an array
const sum = (val) => {
  return val.reduce((a, b) => {
    if (b % 2 === 1) { // If the current number is odd
      a+=b;            // Add current number to accumulator
    }
    return a;          // The accumulator
  }, 0);
};

Commentless code looks like this:

const sumOddValues = (array) => {
  return array.reduce((accumulator, currentNumber) => {
    if (isOdd(currentNumber)) { 
      return accumulator + currentNumber;
    }
    return accumulator;
  }, 0);
};

Better function and argument names eliminate most comments. Remember that before commenting.

Sometimes you do have to use comments to clarify the code. When you do, your comments should answer WHY this code exists, not WHAT it does.

Don't write WHAT comments just to narrate the code. Here are some unnecessary comments that only add clutter:

// create a variable and initialize it to 0
let sum = 0;
// Loop over array
array.forEach(
  // For each number in the array
  (number) => {
    // Add the current number to the sum variable
    sum += number;
  }
);

Don't be that programmer. Reject that code. Remove those comments if you have to work with it. Most importantly, teach programmers how awful such comments are. Tell programmers who write comments like these that they might lose their jobs over it. That's how terrible they are.

12) Skipping tests

Let me make this simple: if you're writing code without tests because you think you're too good a programmer to need them, you're a rookie.

If you're not writing tests in code, you're testing manually. If you're building a web application, you'll refresh the page and interact with it after every few lines of code. There's nothing wrong with testing code manually, but you should do it to figure out how to test it automatically. After you successfully test an interaction with your application, go back to your code editor and write code that performs that same interaction automatically the next time you add more code.

You're human. You will forget to re-test all the previously passing checks after each code change. Let a machine do it for you!

If you can, guess or design your validations before you write the code that satisfies them. Test-driven development (TDD) is a real thing, and it positively affects the way you think about your features and how to design them better.

If you can use TDD, even partially, do so.
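
As a taste of how little it takes to get started, here is a sketch that turns a manual check into an automated one using plain console.assert, with the isOdd helper from earlier in this article:

function isOdd(number) {
  return number % 2 === 1;
}

// Instead of re-checking these by hand after every change,
// let the script complain when something breaks:
console.assert(isOdd(3) === true, 'isOdd(3) should be true');
console.assert(isOdd(4) === false, 'isOdd(4) should be false');
console.assert(isOdd(0) === false, 'isOdd(0) should be false');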

13) Making the assumption that if something is working, it must be right.

Look at this sumOddValues function. Can you tell what's wrong with it?

const sumOddValues = (array) => {
  return array.reduce((accumulator, currentNumber) => {
    if (currentNumber % 2 === 1) { 
      return accumulator + currentNumber;
    }
    return accumulator;
  });
};
 
 
console.assert(
  sumOddValues([1, 2, 3, 4, 5]) === 9
);

The assertion passes. Life is good, right?

The code above is incomplete. It handles some cases correctly, including the one asserted above, but it has many other problems. Let's go through a few:

Problem #1: Empty input isn't handled. What happens when the function is called with no arguments? You get an error that exposes the function's implementation:

TypeError: Cannot read property 'reduce' of undefined.

That's a sign of badly designed code, for two main reasons:

  • Your function's users shouldn't come across implementation-related information.

  • The error is useless to your users. All it tells them is that they couldn't use your function. If the error were clearer about the usage problem, they'd know they misused it. You might decide to make the function throw a custom exception instead, for instance:

TypeError: Cannot execute function for empty list.

Alternatively, instead of throwing an error, you might make the function ignore empty input and return a sum of 0. Either way, this case has to be handled.

Problem #2: Invalid input isn't handled. What happens if the function is called with a string, an integer, or an object instead of an array?

The function now throws:

sumOddValues(42);
TypeError: array.reduce is not a function

Unfortunately for the caller, array here is just the name we gave the argument. Because we named it array, the function labels whatever you call it with (42 in the example above) as array. What the error is really saying is that 42.reduce is not a function.

Do you see how confusing that error is? Maybe an error like this would help more:

TypeError: 42 is not an array, dude.

Problems #1 and #2 are edge cases. They're typical ones, but you should also think about less obvious cases. For example, what happens with negative numbers?

sumOddValues([1, 2, 3, 4, 5, -13]) // => still 9

-13 is an odd value. Is ignoring it the behavior you want? Should the function throw an error? Should it add negative odd numbers to the sum? Should it keep ignoring them? Maybe you'll realize the function should have been named sumPositiveOddNumbers instead.

Whichever behavior you choose, the more important point is this: if you don't write a test case documenting your decision, future maintainers of the function won't know whether ignoring negative values was intentional or accidental.

It’s not a bug. It’s a feature. — Someone who forgot a test case

Problem #3: Not all valid cases are tested. Forget edge cases; this function mishandles a perfectly straightforward one:

sumOddValues([2, 1, 3, 4, 5]) // => 11

The 2 above was wrongly included in the sum; the correct answer is 9.

The fix is simple: reduce accepts a second argument that initializes the accumulator. When that argument is missing, as in the code above, reduce uses the first value in the collection as the accumulator, so the first value in this test case, an even number, ended up in the sum.

That test case should be part of the test suite, along with many others: an all-even list, a list containing 0, an empty list, and so on.
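
Here is one possible fixed version (a sketch of my own, not the article's official answer): it initializes the accumulator and rejects non-array input with a clearer error.

const sumOddValues = (array) => {
  if (!Array.isArray(array)) {
    throw new TypeError(`${array} is not an array, dude.`);
  }
  return array.reduce((accumulator, currentNumber) => {
    if (currentNumber % 2 === 1) {
      return accumulator + currentNumber;
    }
    return accumulator;
  }, 0); // the 0 initializes the accumulator
};

console.assert(sumOddValues([2, 1, 3, 4, 5]) === 9);
console.assert(sumOddValues([]) === 0);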

Writing only rudimentary tests that ignore edge cases is another mark of newbie code.

14) Treating Existing Code as Law

Unless you're a supercoder who works completely alone, you will encounter bad code. Beginners don't recognize it as bad; they assume it must be decent because it works and has been in the codebase for a while.

Worse, if the bad code uses bad practices, the newbie may be tempted to repeat them elsewhere in the codebase, having "learned" them from what they assumed was good code.

Sometimes a unique constraint forced the developer to write awkward code. That's a great place for a thorough comment explaining the constraint and why the code is written the way it is.

Beginners should presume that undocumented code they don't understand is bad. Ask. Enquire. Blame it!

If the code's author is long gone or can't remember it, research the code and try to understand it. Only once you understand it can you judge whether it's good or bad. Before that, assume nothing.

15) Being fixated on best practices

The label "best practice" can do real damage. It implies there's no need for further research: this is the best practice ever, no questions asked!

There are no best practices. There may be good practices for today, for this programming language.

Plenty of what we once called programming best practices are considered bad practices today.

Time will always reveal better practices. Focus on what works best for you and your team, not on the "best practice" label.

Don't do anything just because you read a quote about it, saw someone else do it, or heard it's a best practice. That includes all the advice in this article! Ask questions, challenge theories, know your options, and make informed decisions.

16) Being preoccupied with performance

Premature optimization is the root of all evil (or at least most of it) in programming — Donald Knuth (1974)

I think Donald Knuth's advice is still relevant today, even though programming has changed.

Do not optimize code if you cannot measure the suspected performance problem.

If you're optimizing code before it has even run, chances are you're optimizing prematurely, and you may well be wasting your time.

There are, of course, obvious optimizations worth considering while writing new code. In Node.js, for example, you must not flood the event loop or block the call stack. Keep that kind of early optimization in mind by always asking: will this code block the call stack?

Beyond the obvious, avoid optimizing code you haven't measured. If you do it anyway, your "performance boost" may well introduce new problems.

Stop optimizing unmeasured performance issues.

17) Missing the End-User Experience as a Goal

What's the easiest way to add a feature to an app? Look at it from your own point of view, or find a spot for it in the existing user interface, right? If the feature captures user input, add it to the form. If it adds a link, tuck it into your nested menu of links.

Don't be that developer. Be a professional who empathizes with users: one who imagines the needs and behavior of the people who will actually use the feature, and who focuses on making it easy to find and use, not merely on getting it into the software.

18) Choosing the incorrect tool for the task

Every programmer has their preferred tools. Most tools are good for one thing and bad for others.

The worst tool for screwing in a screw is a hammer. Do not use your favorite hammer on a screw. Don't use Amazon's most popular hammer on a screw.

A true beginner relies on tool popularity rather than problem fit.

Sometimes you simply won't know the best tool for a project. Even when a best tool exists, you may not have heard of it, because the best tool doesn't always rank high in popularity. You have to learn your tools and stay open to new ones.

Some coders resist new tools. They like the ones they have and don't want to learn new ones. I can relate, but it's the wrong attitude.

You can build a house slowly with primitive tools, or quickly with better ones. You have to keep learning and using new tools.

19) Failing to recognize that data issues are caused by code issues

Programs commonly manage data. The software will add, delete, and change records.

Even the simplest programming errors can corrupt data in unpredictable ways, especially when the same defective application is the only thing validating that data.

Beginners sometimes miss the connection between code problems and data problems. They might tolerate known-buggy code in production because "feature X isn't critical." But buggy code can quietly create data integrity issues.

Worse, deploying a fix for the code without also repairing the "minor" data problems the bug already caused just lets data problems accumulate, until the situation reaches the unrecoverable level.

How do you protect yourself? Use multiple layers of data integrity validation: front-end, back-end, network, and database validation. If you can't do them all, at the very least use database constraints.

Use all database constraints when adding columns and tables:

  • If a column has a NOT NULL constraint, null values are rejected for that column. If your application expects that field to always have a value, the database should declare the column as not null.

  • If a column has a UNIQUE constraint, the entire table cannot include duplicate values for that column. This is ideal for a username or email field on a Users table, for instance.

  • For data to be accepted, a CHECK constraint (a custom expression) must evaluate to true. For instance, you can apply a check constraint to ensure that the values of a percentage column fall between 0 and 100.

  • With a PRIMARY KEY constraint, the column's values must be both unique and not null. You're probably already using this one: every table needs a primary key to distinguish its records.

  • A FOREIGN KEY constraint requires that the values in one database column, typically a primary key, match those in another table column.

Ignoring transactions is another newbie data-integrity issue. If multiple operations modify the same data source and depend on each other, they must be wrapped in a transaction that can be rolled back if any one of them fails.
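
Here is a minimal sketch of that wrapping. The db object stands in for whatever database client you use; its method names here are hypothetical, not a specific library's API:

async function transferFunds(db, fromId, toId, amount) {
  const tx = await db.beginTransaction();
  try {
    await tx.query('UPDATE accounts SET balance = balance - $1 WHERE id = $2', [amount, fromId]);
    await tx.query('UPDATE accounts SET balance = balance + $1 WHERE id = $2', [amount, toId]);
    await tx.commit(); // both updates take effect together...
  } catch (error) {
    await tx.rollback(); // ...or neither does
    throw error;
  }
}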

20) Reinventing the Wheel

This one is tricky. Some programming wheels genuinely need reinventing. Programming isn't a fully mapped-out domain; new requirements and changes arrive faster than any team can keep up with.

If you need a wheel that spins at different speeds depending on the time of day, maybe you should rethink the wheel rather than modify the one we all know and love. But if you don't need a non-standard wheel, don't reinvent it. Just use the darn wheel.

Choosing among wheel brands can be hard, so research and test before you commit. Most software wheels are free and open, letting you judge them by the quality of their internal design. Favor open-source wheels: you can debug and fix them easily, replace them more easily, and even support them in-house if you need to.

And if you only need a wheel, don't buy a whole new car and tow your well-maintained one on top of it. Don't include an entire library just to use a couple of its functions. lodash in JavaScript is the best example: if you need to shuffle an array, import the shuffle function; don't import all of lodash.
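
A quick sketch of that lodash point, assuming lodash is installed:

// Don't: this pulls in the entire lodash library for a single function.
// import _ from 'lodash';
// const shuffled = _.shuffle([1, 2, 3]);

// Do: import only the function you need.
import shuffle from 'lodash/shuffle';

const shuffled = shuffle([1, 2, 3]);
console.log(shuffled);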

21) Adopting the incorrect perspective on code reviews

Beginners often see code reviews as criticism. They dislike them, they don't appreciate them, and some even fear them.

That's the wrong attitude. If it's yours, change it right now. Treat every code review as a learning opportunity. Welcome them, absorb them, and above all, thank reviewers who teach you something.

You will be a code learner forever. Accept that. Most code reviews will teach you something new; use them for exactly that.

Sometimes the reviewer is wrong and you're the one who has to do the teaching. But if your code didn't make that obvious on its own, it may still need to change. And if you do end up teaching your reviewer, remember that teaching is one of the most rewarding things a programmer gets to do.

22) Not Using Source Control

Newbies often underestimate Git's capabilities.

Source control is about far more than sharing your changes. It's about clear history. The history of the code will help whoever comes after you work through complex problems. Commit messages matter: they're another channel for communicating your implementations, and paired with small, focused commits they help future maintainers understand how the code got to where it is.

Commit early and often, using present-tense verbs. Keep your messages summarized but detailed enough to be useful. If you need more than a few lines, your commit is probably too big. Rebase!

Avoid noise in commit messages. A commit summary should not list the files that were added, changed, or deleted; Git commands can produce that list from the commit object, so repeating it in the message is just noise. A commit that needs a per-file summary is probably too big in the first place.

Source control is also about discoverability. If you're questioning why a function exists or how it was designed, you can find the commit that introduced it and see its full context. Commits can even pinpoint which change caused a bug: Git offers a binary search through commits (git bisect) to locate the one that introduced it.

Source control pays off even before you commit. Staging changes, patching selectively, resetting, stashing, amending, applying, diffing, reverting, and more all enrich your coding flow. Know them, use them, and appreciate them.

If you don't know and use these capabilities, to me you're still a Git rookie.

23) Excessive Use of Shared State

Again, this is not about functional programming vs. other paradigms. That's another article.

Shared state is problematic and should be avoided if feasible. If not, use shared state as little as possible.

As a new programmer, I didn't realize that every variable is shared state: all the code in its scope can read and change its data. The wider the scope, the wider the span of that shared state. Keep new state in the smallest scope possible and avoid leaking it upward.

Things get really bad when multiple operations modify the same shared state within the same event-loop tick (in event-loop-based environments). That's when race conditions happen.

A rookie facing this kind of race condition, especially one involving a data lock, may be tempted to work around it with a timer. That's a big red flag. Don't do it. Never accept that solution.
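
Here is a small, contrived sketch of how shared state plus asynchronous code loses updates in an event-loop environment:

let counter = 0; // shared state

async function incrementAfterSlowRead() {
  const current = counter;                                  // read
  await new Promise((resolve) => setTimeout(resolve, 10));  // something async happens in between
  counter = current + 1;                                    // write back a stale value
}

async function main() {
  await Promise.all([incrementAfterSlowRead(), incrementAfterSlowRead()]);
  console.log(counter); // 1, not the expected 2: one update was lost
}

main();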

24) Adopting the Wrong Mentality Toward Errors

Errors are good. They mean progress. They point you at an easy way to improve.

Expert programmers enjoy errors. Newbies detest them.

If those lovely red error messages irritate you, change your mindset. See them as helpers. Deal with them. Use them to advance.

Some errors should be escalated into exceptions, and you should plan for user-defined exceptions where they make sense. Some errors are fine to ignore. And some should simply crash the app and make it exit.

25) Ignoring rest periods

Human brains need breaks. Take them. When you're in the zone, you'll forget to; that's another mark of a beginner. Don't compromise here: make breaks a mandatory part of your process. Take frequent pauses. Step away for a short walk and use it to plan your next move. Then come back and reread the code with fresh eyes.

This has been a long post. You deserve a break.