Displacement of ten million Ukrainians
According to recent data from two UN agencies, ten million Ukrainians have been displaced.
The International Organization for Migration (IOM) estimates nearly 6.5 million Ukrainians have been displaced within the country. Most have fled the war zones around Kyiv and eastern Ukraine, including Dnipro, Zaporizhzhia, and Kharkiv, and most IDPs have relocated to western and central Ukraine.
Since Russia invaded on Feb. 24, 3.6 million people have crossed the border to seek refuge in neighboring countries, according to the latest UN data. While most refugees have fled to Poland and Romania, many have entered Russia.
Internally displaced figures are IOM estimates as of March 19, based on 2,000 telephone interviews with Ukrainians aged 18 and older conducted between March 9 and 16. The UNHCR compiled the refugee figures for neighboring countries on March 21 from official border-crossing data and its own estimates. The UNHCR's top-line total is lower than the sum of the country totals because the Romania and Moldova totals include people crossing between those two countries.
Sources: IOM, UNHCR
According to IOM estimates based on telephone interviews with a representative sample of internally displaced Ukrainians, over 53% of those displaced are women, and over 60% of displaced households have children.

Jared A. Brock
3 years ago
Here is the actual reason why Russia invaded Ukraine
Democracy's demise
Our Ukrainian brothers and sisters are being attacked by a far superior force.
It's the biggest invasion since WWII.
43.3 million peaceful Ukrainians awoke this morning to tanks, mortars, and missiles. Russian forces are already 15 miles away.
America and the West will not deploy troops.
They're sanctioning. Except railways. And luxuries. And energy. Diamonds. Their dependence on Russian energy exports means they won't even cut Russia off from SWIFT.
Ukraine is desperate enough to hand out guns on the street.
France, Austria, Turkey, and the EU are considering military aid, but Ukraine will fall without America or NATO.
The Russian goal is likely to encircle Kyiv and topple Zelenskyy's government. A proxy government will be installed once Russia has total control.
“Western security services believe Putin intends to overthrow the government and install a puppet regime,” says Financial Times foreign affairs commentator Gideon Rachman. This “decapitation” strategy extends to municipalities: Ukrainian officials are being targeted for arrest or death.
Also, Putin has never lost a war.
Why is Russia attacking Ukraine?
Putin, like a snowflake college student, “feels unsafe.”
Why?
Because Ukraine is full of “Nazi ideas.”
Putin claims he has felt threatened by Ukraine since the country's pro-Putin leader was ousted and replaced by a popular Jewish comedian.
Hee hee
He fears a full-scale enemy on his doorstep if Ukraine joins NATO. But he refuses to see it both ways. NATO has never invaded Russia, but Russia has always stolen land from its neighbors. Can you blame them for joining a mutual defense alliance when a real threat exists?
Nations that feel threatened can join NATO. That doesn't justify an attack by Russia; it simply allows them to defend themselves. And NATO isn't attacking Moscow. They aren't.
Russian President Putin's "special operation" aims to de-Nazify the Jewish-led nation.
To keep Crimea and the other two regions he has already stolen, he wants Ukraine undefended by NATO.
(Warlords have fought for control of the strategically important Crimea for over 2,000 years.)
Putin wants to own all of Ukraine.
Why?
The Black Sea is his goal.
Ports bring money and power, and Ukraine's pipelines transport Russian energy products.
Putin wants their wheat, too — with roughly 70% of its land under cultivation, Ukraine would be Russia's southern breadbasket, and Russia has no qualms about starving millions of Ukrainians to death to feed its own people.
In the end, it's all about greed and power.
Putin wants to own everything Russia has ever owned. This year he turns 70, and he wants to be remembered like his hero Peter the Great.
To get it, he's willing to kill thousands of Ukrainians.
Art imitates life
This story began when a Jewish TV comedian portrayed a teacher who gets elected president after ranting about corruption.
Servant of the People, the hit sitcom, lent its name to what is now the leading centrist political party.
That's right: President Zelenskyy won the hearts and minds of Ukrainians by imagining a fairer world.
A fair fight is something dictators, corporatists, monopolists, and warlords despise.
Now Zelenskyy and his people will die, allowing one of history's most corrupt leaders to amass even more power.
The poor always lose
Meanwhile, the West will impose economic sanctions on Russia.
China is likely to step in to help Russia — or at least the wealthy.
The poor and working class in Russia will suffer greatly if there is a hard crash or long-term depression.
Putin's friends will continue to drink champagne and eat caviar.
If Russia cuts off oil, gas, and fertilizer supplies to the West, it could cause more inflation and possibly a recession — more suffering and hardship for the Western poor and working class.
Why? So a billionaire sociopath gets his dirt.
Yes, Russia is simply copying America. Some of us think all war is morally wrong, regardless of who does it.
But let's not kid ourselves right now.
The markets rallied after the biggest invasion in Europe since WWII.
Investors hope Ukraine collapses and Russian oil flows.
Unbridled capitalists value profit over life.
What we can do about Ukraine
When the Russian army invaded eastern Finland, my wife's grandmother fled as a child. 80 years later, Russia still has Karelia.
Russia invaded Ukraine today to retake two eastern provinces.
History has taught us nothing.
Repeating past mistakes won't fix the future.
Instead, we should try:
- Pray and/or meditate on our actions with our families.
- Stop buying Russian products (vodka, obviously) and Russian energy — pay more for hydro/solar/geothermal/etc. instead.
- Stop wasting money on frivolous items and donate it to Ukrainian charities.
Here are 35+ places to donate.
- To protest, gather a few friends, contact the media, and shake signs in front of the Russian embassy.
- Prepare to welcome refugees.
More war won't save the planet or change hearts.
Only love can work.

Scott Galloway
3 years ago
Text-ure
While we played checkers, we thought billionaires played 3D chess. They're playing the same game on a fancier board.
Every medium has nuances and norms. Texting is authentic and casual. A smaller circle has access, creating intimacy and immediacy. Most people read all their texts, but not all their email and mail. Many of us no longer listen to our voicemails, and calling your kids ages you.
Live interviews and testimony under oath inspire real moments, rare in a world where communications departments sanitize everything powerful people say. When (some of) Elon's text messages became public in Twitter v. Musk, we got a glimpse into the bowels of tech power.
These texts illuminate the tech community's upper caste.
Checkers, Not Chess
Elon texts with Larry Ellison, Joe Rogan, Sam Bankman-Fried, Satya Nadella, and Jack Dorsey. What the texts reveal about their logic, prose, and discourse is astounding: the world's richest man and his followers come off as unsophisticated, obtuse, and petty. Possibly. While we played checkers, we thought billionaires played 3D chess. They're playing the same game on a fancier board.
They fumble with their computers.
They lean on others to get jobs for their kids (no surprise).
No matter how rich, they always could use more (money).
And differences: a social hierarchy exists. Among this circle, the currency of deference is... currency. Money increases sycophancy. The texts from Oculus and from Elon's "friends" induce nausea.
Autocorrect frustrates everyone.
Elon doesn't stand out to me in these texts; he comes off mostly OK in my view. It's the people around him. It seems our idolatry of innovators has infected the uber-wealthy, giving them an uncontrollable urge to kill for a seat at the cool kid's cafeteria table. "I'd jump on a grenade for you." If someone says this and you're not in combat together, they're a fan, not a friend.
Many powerful people are undone by their fake friends — facilitators, not well-wishers. When the Elon–Twitter saga started, I wrote about power. Unchecked power is intoxicating. This is a scientific finding, not a thesis. Power causes us to downplay risk, magnify rewards, and act on instinct more quickly. You lose self-control and must rely on others.
You'd hope the world's richest person has advisers who push back when necessary (i.e., not yes men). Elon's reckless, childish behavior and these texts show there is no truth-teller. I found just one pushback in the 151-page document. It came from Twitter CEO Parag Agrawal, who, in response to Elon’s unhelpful “Is Twitter dying?” tweet, let Elon know what he thought: It was unhelpful. Elon’s response? A childish, terse insult.
Scale
The texts are mostly unremarkable. There are some, however, that do remind us the (super-)rich are different. Specifically, the discussions of possible equity investments from crypto-billionaire Sam Bankman-Fried (“Does he have huge amounts of money?”) and this exchange with Larry Ellison:
Ellison, who co-founded $175 billion Oracle, is wealthy — wealthy enough to offer a billion dollars over text. Who hasn't texted a friend $1 billion? Ellison offered 8,000 times the median American's net worth, enough to buy 3,000 Ferraris or the Chicago Blackhawks. It's a bedrock principle of capitalism that incredibly successful people end up exponentially wealthier than the rest of us; it creates an incentive structure that inspires productivity and prosperity. But when people offer billions over text to fund a billionaire's vanity project, in a country where 1 in 5 children is food insecure, something in America is messed up.
Elon's Morgan Stanley banker, Michael Grimes, tells him that Web3 investor Bankman-Fried could put $5 billion into the deal: “could do $5bn if everything vision lock... Believes in your mission." The message bothers Elon. In Elon's world, $5 billion doesn't warrant a worded response — though it's more than many small nations' GDP, twice the SEC's budget, and five times the NRC's budget.
If income inequality worries you after reading this, trust your gut.
Billionaires aren't like the rich.
As an entrepreneur, academic, and investor, I've met modest-income people, rich people, and billionaires. Rich people do seem different to me: they're smarter and harder-working than most Americans. Monty Burns from The Simpsons is a caricature of rich people. Real rich people have character and know how to make friends — success requires supporters.
I've never noticed a talent or intelligence gap between the wealthy and the ultra-wealthy, though. The tech elite conflate talent and luck. Going from millions to hundreds of millions or billions is more about timing than incremental intelligence. Proof? Elon's texting. Any man who electrifies the auto industry and lands two rockets on barges is a genius. But his mega-billions come from a well-regulated capital market, enforceable contracts, thousands of workers, and billions of dollars in government subsidies, including a $465 million DOE loan that allowed Tesla to produce the Model S. So, is Mr. Musk a genius, or an impressive man in a unique time and place?
The Point
What else did Elon's texts teach us? He can't "fix" Twitter. For two weeks in April, he was all in on blockchain Twitter, brainstorming Dogecoin payments for tweets with his brother Kimbal — i.e., paid speech — while telling Twitter's board he was going to make a hostile tender offer. Kimbal approved. By May, he was over crypto and "laborious blockchain debates." (Mood.)
Elon asked the Twitter CEO for "an update from the Twitter engineering team." No record shows whether he got the meeting. That won't "fix" Twitter either. And this is Elon's problem: he's a grown-up child with all the toys and no boundaries. His yes-men encourage his most facile thoughts, and his shitposts and errant behavior diminish his genius — and ours.
Post-Apocalyptic
The universe's titans have a sense of humor.
Every day, we must ask: Who keeps me real? Who will disagree with me? Who will save me from my psychosis, which has brought down so many successful people? Elon Musk doesn't need anyone to jump on a grenade for him; he needs to stop throwing them because one will explode in his hand.

B Kean
3 years ago
Russia's greatest fear is that no one will ever fear it again.
When everyone laughs at him, he's powerless.
1-2-3: Fold your hands and chuckle heartily. Repeat until you're really laughing.
We're laughing at Russia's modern-day shortcomings, if you hadn't guessed.
Watch the laughing scene from Goodfellas on YouTube. Ray Liotta, Joe Pesci, and the others laugh hysterically. Laugh at that scene, then think of Putin's macho-guy statement on February 24, when he invaded Ukraine. It's cathartic to laugh at his expense.
Right? It makes me feel great that he was convinced the military action would be over in a week. I love reading about Putin's morning speech. Many stupid people on Earth supported him. Many loons hailed his speech as historic.
Russia preys on the weak, but a strong Ukraine overcame it. Ukraine is in the right; as usual, Russia is in the wrong.
A so-called thought leader recently complained on Russian TV that the West no longer fears Russia, which is why Ukraine is kicking Russia's ass.
Let's simplify for this Russian intellectual. Except for nuclear missiles, the West has nothing to fear from Russia. Russia is a weak, morally-empty country whose DNA has degraded to the point that evolution is already working to flush it out.
That this man heads a prominent Russian institution is itself part of why the West doesn't fear Russia: Russian universities are intellectually barren. I taught at St. Petersburg University until June (from February, virtually) and was astounded by the lack of expertise.
Russians excel in science, math, engineering, IT, and anything that doesn't demand critical thinking or personal ideas.
Reflecting on many of the high-ranking officials around the West, Satanovsky said: “They are not interested in us. We only think we’re ‘big politics’ for them, but for those guys we’re small politics. We’re small politics, even though we think of ourselves as the descendants of the Russian Empire, of the USSR. We are not the Soviet Union; we don’t have enough weirdos and lunatics, we practically don’t have any” (U.S. Has Stopped Fearing Us).
Professor Dmitry Evstafiev praised Nikita Khrushchev's fiery nature: he made the world fear him, and that, supposedly, made the Soviet Union great. If the world believes Putin is crazy, then Russia will be great, says this man. This is crazy.
Evstafiev covered his cowardice by saluting Putin, praising his culture and his patience with Ukraine. This weakling professor ingratiates himself with Putin instead of calling him a cowardly, demonic shithead.
This is why we don't fear Russia, professor. Because you're all sycophantic weaklings who sold your souls to a Leningrad narcissist. Putin's nothing. He lacks intelligence. You've tied your country's fate and youth's future to this terrible monster. Disgraceful!
How can you loathe your country's youth so much as to doom them to decades or centuries of ignominy? My son is half Russian and must now live with this part of himself.
We don't fear Russia because you don't realize that a country should earn appreciation, not fear. Changing that would require lobotomizing tens of millions of people like you.
Sad man. You let a Leningrad weakling castrate you and display your testicles in a jar. He shakes the container, saying, "Your balls are mine."
Why is Russia not feared?
Your self-inflicted national catastrophe is hilarious. Sadly, it's laughter through tears.

Dmitrii Eliuseev
2 years ago
Creating Images on Your Local PC Using Stable Diffusion AI
Deep learning-based generative art is an active research area. And as usual, trying things yourself teaches more than reading about them. Some models, like OpenAI's DALL-E 2, require registration and can only be used online, but others can run locally, which is usually more fun for curious users. I'll demonstrate how to run the Stable Diffusion model on a standard PC.
Let’s get started.
What It Does
Stable Diffusion is built from several components:
- A diffusion model, a generative model trained to produce images. The model incrementally refines its input, which starts as pure random noise. During training, the process runs in reverse: noise is progressively added to a real image. The true magic is in learning to undo this procedure and create images from noise (more details and samples can be found in the paper).
- A latent diffusion model, which runs the diffusion on an internal compressed representation that can be steered to produce the desired images (more details can be found in the paper). The ability to guide the generation process is essential, because producing random pictures is not very useful (as we can see, for instance, in Generative Adversarial Networks).
- CLIP (Contrastive Language-Image Pre-training), a neural network model used to translate natural language prompts into vector representations. This model, trained on 400,000,000 image-text pairs, transforms a text prompt into the latent space of the diffusion model (more details in that paper).
This figure shows all data flow:
The model is quite huge: the weights file is 4 GB for Stable Diffusion v1 and 5 GB for v2. The v1 model was trained on 256x256 and 512x512 images from the LAION-5B dataset, using over 150,000 NVIDIA A100 GPU-hours on a 4,000-GPU cluster. Luckily for us, the pre-trained model is open source — and we will use it.
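Before the manual setup below, here is the whole pipeline in a few lines of Python, using the Hugging Face diffusers library as an alternative route — a minimal sketch, assuming diffusers is installed and the CompVis/stable-diffusion-v1-4 checkpoint is downloaded on first run:
import torch
from diffusers import StableDiffusionPipeline

# Text prompt -> CLIP embedding -> latent diffusion -> decoded image
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
)
pipe = pipe.to("cuda" if torch.cuda.is_available() else "cpu")
image = pipe("a hamster with a brush").images[0]
image.save("hamster.png")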
Install
Before utilizing the Python sources for Stable Diffusion v1 on GitHub, we must install Miniconda (assuming Git and Python are already installed):
wget https://repo.anaconda.com/miniconda/Miniconda3-py39_4.12.0-Linux-x86_64.sh
chmod +x Miniconda3-py39_4.12.0-Linux-x86_64.sh
./Miniconda3-py39_4.12.0-Linux-x86_64.sh
conda update -n base -c defaults conda
Install the source and prepare the environment:
git clone https://github.com/CompVis/stable-diffusion
cd stable-diffusion
conda env create -f environment.yaml
conda activate ldm
pip3 install transformers --upgrade
Next, download the pre-trained model weights. Hugging Face has the newest checkpoint, sd-v1-4.ckpt (the download is free, but registration is required). Put the file in the project folder and have fun:
python3 scripts/txt2img.py --prompt "hello world" --plms --ckpt sd-v1-4.ckpt --skip_grid --n_samples 1
Almost. For happy users of current GPUs with 12 GB or more of VRAM, the installation is complete. Otherwise, a RuntimeError: CUDA out of memory will occur. Two solutions exist.
Running the optimized version
Let's try the optimized version first. After cloning the repository and activating the environment (as before), run the command:
python3 optimizedSD/optimized_txt2img.py --prompt "hello world" --ckpt sd-v1-4.ckpt --skip_grid --n_samples 1
Stable Diffusion now worked on my graphics card with 8 GB of VRAM (alas, I did not behave well enough to get an NVIDIA A100 for Christmas, so an 8 GB GPU is the maximum I have ;).
Running Stable Diffusion without GPU
If the GPU does not have enough RAM or is not CUDA-compatible, running the code on the CPU is about 20x slower, but better than nothing. The easiest way is to grab this unofficial CPU-only branch from GitHub. Alternatively, we can easily edit the source code of the latest version ourselves. Strangely, a pull request for this was made six months ago and still hasn't been merged, even though the changes are simple — readers can finish them in 5 minutes (a sketch of the resulting pattern follows the list):
- At line 20 of ldm/models/diffusion/ddim.py, replace if attr.device != torch.device("cuda") with if attr.device != torch.device("cuda") and torch.cuda.is_available().
- At line 20 of ldm/models/diffusion/plms.py, make the same replacement.
- At lines 38, 55, 83, and 142 of ldm/modules/encoders/modules.py, replace device="cuda" with device="cuda" if torch.cuda.is_available() else "cpu".
- At line 28 of scripts/txt2img.py and line 43 of scripts/img2img.py, replace model.cuda() with if torch.cuda.is_available(): model.cuda().
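All four edits boil down to the same device-selection pattern. Here is a minimal, generic sketch of the patched logic (the Linear layer is just a stand-in for the repo's loaded model):
import torch

# The gist of the edits: never hard-code "cuda"; fall back to the CPU.
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

model = torch.nn.Linear(4, 4)  # stand-in for the loaded checkpoint
model = model.to(device)       # replaces the unconditional model.cuda()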
Run the script again.
Testing
Let's test the model. Text-to-image is the first option; run the command-line example again:
python3 scripts/txt2img.py --prompt "hello world" --plms --ckpt sd-v1-4.ckpt --skip_grid --n_samples 1
Generation takes about 10 seconds on a GPU and about 10 minutes on a CPU. The final image:
"Hello world" turned out dull and abstract. Let's try a brush-wielding hamster instead. Why? Because we can, and it's no more insane than Napoleon's cat. Another image:
Generating an image from a text prompt plus another image is also interesting. I made this picture in two minutes in an image editor (sorry, drawing was never my strong suit):
I can create an image from this drawing:
python3 scripts/img2img.py --prompt "A bird is sitting on a tree branch" --ckpt sd-v1-4.ckpt --init-img bird.png --strength 0.8
The result was far better than my initial drawing:
I hope readers get the idea — and experiment on their own.
Stable Diffusion UI
Developers love the command line, but regular users may struggle with it. Projects like Stable Diffusion UI simplify installation and image generation. Usage is simple:
Download the ZIP from https://github.com/cmdr2/stable-diffusion-ui/releases and unpack it. Stable Diffusion UI is compatible with Linux and Windows (sorry, Mac users, but those machines are not well suited for heavy machine learning tasks anyway ;).
Start the script.
Done. The web browser UI makes configuring various Stable Diffusion features (upscaling, filtering, etc.) easy:
Stable Diffusion v2.1
While writing this article, I noticed the announcement of version 2.1, and it was intriguing to test it. First, let's compare version 2 with version 1:
- Alternative text encoding. Stable Diffusion 1 uses the Contrastive Language-Image Pre-training (CLIP) deep learning model, trained on a large number of text-image pairs. Stable Diffusion 2 uses OpenCLIP, an open-source CLIP implementation. It is hard to tell whether this brought technical advances or mostly resolved legal concerns; either way, because the two text encoders were trained on different datasets, V1 and V2 produce different results for identical text prompts.
- A new depth model that can be applied to the output of image-to-image generation.
- A new upscaling technique that can enlarge images 4x.
- Generally higher resolution: Stable Diffusion 2 can produce both 512x512 and 768x768 pictures.
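For those who prefer the diffusers route from the earlier sketch, v2.1 loads the same way — a sketch, assuming the stabilityai/stable-diffusion-2-1 checkpoint id on the Hugging Face Hub:
from diffusers import StableDiffusionPipeline

# The 768x768-capable v2.1 weights published by Stability AI
pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1")
image = pipe("a hamster with a brush", height=768, width=768).images[0]
image.save("hamster_v21.png")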
The Hugging Face website offers a free online demo of Stable Diffusion 2.1. To test the code locally, the process is the same as for version 1.4: download the fresh sources and activate the environment:
conda deactivate
conda env remove -n ldm # Use this if version 1 was previously installed
git clone https://github.com/Stability-AI/stablediffusion
cd stablediffusion
conda env create -f environment.yaml
conda activate ldm
Hugging Face offers a new weights ckpt file.
An Out of memory error prevented me from running this version on my 8 GB GPU. On the CPU, version 2.1 fails with a "slow_conv2d_cpu" not implemented for 'Half' error (according to this GitHub issue, CPU support for this algorithm and data type will not be added). The model can be converted from half to full precision (float32 instead of float16), but this makes little sense: v1 already takes up to 10 minutes on the CPU, and v2.1 would be even slower. The online demo works, though. The same hamster-painting-with-a-brush prompt yielded this result:
It looks different from v1, but it functions and has a higher resolution.
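As an aside on the precision issue: switching a PyTorch model between half and full precision is a one-liner. A generic sketch (a toy layer stands in for the checkpoint), showing why float32 avoids the CPU error:
import torch

model = torch.nn.Conv2d(3, 3, 3).half()  # half precision, as shipped in the checkpoint
x = torch.randn(1, 3, 64, 64)

# On a CPU, running the half-precision model raises:
# RuntimeError: "slow_conv2d_cpu" not implemented for 'Half'
model = model.float()  # convert to full precision (float32), doubling memory use
y = model(x)           # now works on the CPU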
The superresolution.py script can run the 4x Stable Diffusion upscaler locally (the x4-upscaler-ema.ckpt weights file should be in the same folder):
python3 scripts/gradio/superresolution.py configs/stable-diffusion/x4-upscaling.yaml x4-upscaler-ema.ckpt
This code opens a web-browser UI where we can select the image to upscale:
The upscaler oddly requires a text prompt — perhaps a copy-paste artifact, especially since the Hugging Face code snippet has no text input either. I got a GPU out-of-memory error again; CUDA can be disabled as with v1, but processing a single image then took more than two hours, which is hardly practical:
Stable Diffusion Limitations
When we use the model, it's fun to see what it can and can't do. Generative models are good at abstract visuals but not photorealistic ones, and this limitation is fundamental. The generative neural network was trained on text and image pairs, but unlike humans, it has no background knowledge about the world. If someone asks me to draw a Chinese text, I can draw something that looks like Chinese but is actually gibberish, because I never learned the language. Generative AI does the same! Humans can learn new languages, but the Stable Diffusion AI model only contains the "language" and "image decoder" brain components. For instance, the Stable Diffusion model will draw NO WAR banner-bearers like this:
V1:
V2.1:
The image shows text, although the model never learned to read or write. The model's string tokenizer automatically converts letters to lowercase before generating the image, so typing NO WAR banner or no war banner gives the same result.
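We can verify the lowercasing with the tokenizer itself — a quick sketch, assuming SD v1's text encoder uses the standard CLIP tokenizer from the transformers package:
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

# CLIP's tokenizer lowercases input, so both prompts yield identical token ids.
upper = tokenizer("NO WAR banner")["input_ids"]
lower = tokenizer("no war banner")["input_ids"]
print(upper == lower)  # True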
I can also ask the model to draw a gorgeous woman:
V1:
V2.1:
The first image is gorgeous but anatomically incorrect. The second one is better, although it has an uncanny-valley feel. By the way, v2 has a useful lifehack: a negative prompt that defines what we don't want in the image. Readers might try adding "horrible anatomy" as a negative prompt to the gorgeous-woman request.
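In the diffusers API, this trick is the negative_prompt argument — a sketch, reusing the assumed v2.1 checkpoint from before:
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1")

# negative_prompt steers the sampler away from unwanted features.
image = pipe(
    prompt="portrait of a gorgeous woman",
    negative_prompt="horrible anatomy",
).images[0]
image.save("woman.png")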
If we ask for a cartoon of an attractive woman instead, the results are nice, and here accuracy doesn't matter:
V1:
V2.1:
Another example: I asked the model to sketch a mouse. It looks beautiful but has too many legs, ears, and fingers:
V1:
V2.1: improved but not perfect.
V1 produces a fun cartoon flying mouse if I want something more abstract:
I tried multiple times with V2.1 but only received this:
The image is OK, but the first version is closer to the request.
Stable Diffusion struggles to draw letters, fingers, etc. However, abstract images yield interesting outcomes. A rural landscape with a modern metropolis in the background turned out well:
V1:
V2.1:
Generative models help make paintings too (at least, abstract ones). I searched Google Image Search for modern art painting to see works by real artists, and this was the first image:
I typed "abstract oil painting of people dancing" and got this:
V1:
V2.1:
It's a different style, but I don't think the AI-generated graphics are worse than the human-drawn ones.
The AI model cannot think like a human — it does not think at all. A Stable Diffusion model is a billion-parameter matrix trained on millions of text-image pairs. To create an image for this post, I entered "robot is creating a picture with a pen". A human would understand this request immediately. I tried Stable Diffusion multiple times and got this:
This great artwork has a pen, a robot, and a sketch — but not what was asked for. Maybe the tokenizer dropped the words "is" and "a" from the sentence, but I also tried other prompts, such as "robot painting picture with pen", without success. It is harder to prompt a model than a person.
I hope Stable Diffusion's general capabilities are now evident. Despite its limitations, it can produce beautiful images in some settings. Readers who want to use Stable Diffusion output should be warned, though: examination of the source code shows that Stable Diffusion images carry a concealed watermark (the text StableDiffusionV1 or SDV2) encoded with the invisible-watermark Python package. It's not a secret; the official Stable Diffusion repository's test_watermark.py file contains a decoding snippet. The put_watermark line in the txt2img.py source code can be removed if desired. I did not detect this watermark on images made by the online Hugging Face demo — maybe I did something incorrectly, or maybe they simply don't use the txt2img script on their backend at all.
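Checking for the watermark is straightforward — a sketch modeled on the decoding snippet in the repository's test_watermark.py (it assumes opencv-python and invisible-watermark are installed, and that out.png is a locally generated image):
import cv2
from imwatermark import WatermarkDecoder

bgr = cv2.imread("out.png")                # generated image, BGR as loaded by OpenCV
decoder = WatermarkDecoder('bytes', 136)   # 136 bits = 17 bytes, e.g. "StableDiffusionV1"
watermark = decoder.decode(bgr, 'dwtDct')  # same method the repo uses for encoding
print(watermark.decode('utf-8'))           # expected: StableDiffusionV1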
Conclusion
Playing with the Stable Diffusion model was fascinating. As I mentioned before, trying something yourself is always better than taking someone else's word for it, so I encourage readers to do the same (and to verify this article as well ;).
Is generative AI a game-changer? My humble experience suggests the following:
I think this space has a lot of potential. For designers and artists, generative AI can be a truly useful and innovative tool. Unfortunately, it can also pose a threat to some of them: if users can type into a text field and obtain a picture or a website logo in a few clicks, why would they pay a third party more? Is that possible right now? Unquestionably not yet. Images are still of rather poor quality and wrong in small details. And after viewing the image of the stunning woman above, models and fashion photographers may also relax: it is highly unlikely that AI will replace them in the coming years.
Today, generative AI is still in its infancy. Neural networks are computationally very expensive, and even 768x768 images are considered high resolution for them. As of the time this article was written, no AI model could natively generate high-resolution photographs without upscaling or other tricks, but it will happen eventually.
It is still a challenge to accurately represent knowledge in neural networks (information like how many legs a cat has or the year Napoleon was born). Consequently, AI models struggle to create photorealistic images, at least where small details matter (on the other hand, when I searched Google for modern art paintings, the results were often even worse ;).
Compared to the carefully chosen images on official web pages or in YouTube reviews, the average output of a Stable Diffusion run is less attractive because of its high degree of randomness. The showcased images may represent only the top 1% of results; users running the same prompts on their own will see the rest.
Anyway, it's exciting to witness this area's progress, especially because the project is open source. Google's Imagen and DALL-E 2 can also produce remarkable results; it will be interesting to see how they develop.

Ari Joury, PhD
3 years ago
7 ways to become a major problem-solver
For some people, the glass is half empty. For others, it’s half full. And for some, the question is, How do I get this glass totally full again?
Problem-solvers are the last group. They're neutral. Pragmatists.
Problems surround them. They fix things instead of judging them. Problem-solvers improve the world wherever they go.
Some fail. Sometimes their good intentions have terrible results — like when they try to help a grandma cross the road because she can't do it alone, only to discover she never wanted to cross.
Most programmers, software engineers, and data scientists solve problems. They use computer code to fix problems they see.
Coding is best done by understanding and solving the problem.
Despite your best intentions, building the wrong solution may have negative consequences. Helping an unwilling grandma cross the road.
How can you improve problem-solving?
1. Examine your assumptions
Don't think, "There's a grandma, and she's unable to cross the road; therefore, I must help her across." Instead think, "This grandma looks unable to cross the road. Let's ask her whether she needs my help."
Maybe the grandma can’t cross the road alone, but maybe she can. You can’t tell for sure just by looking at her. It’s better to ask.
Maybe the grandma wants to cross the road. But maybe she doesn’t. It’s better to ask!
Building software is similar. Is it only me who finds this website ugly? Who can I consult?
We all have biases, mental shortcuts, and worldviews. They simplify life.
Problem-solving requires questioning all assumptions. They might be wrong!
Think less. Ask more.
2. Fully comprehend the issue
Does grandma want to cross the road, or does she want flowers from the shop across the street?
Understanding the problem means thinking two steps ahead. Instead of just watching people and their struggles, try to read their intentions.
Don't ask, "How can I help grandma cross the road?" Ask, "Why would this grandma cross the road? What's her goal?"
Understand what people want before proposing solutions.
3. Request more information (it's not cheating!)
People think great problem solvers solve problems immediately. False!
Problem-solvers study problems. Understanding the problem makes solving it easy.
When you see a grandma struggling to cross the road, you may want to grab her elbow and pull her across. A good problem solver, however, would first ask grandma what she wants:
Problem solver: Excuse me, ma’am? Do you wish to get over the road?
Grandma: Yes indeed, young man! Thanks for asking.
Problem solver: What do you want to do on the other side?
Grandma: I want to buy a bouquet of flowers for my dear husband. He loves flowers! I wish the shop wasn’t across this busy road…
Problem solver: Which flowers does your husband like best?
Grandma: He loves red dahlias. I usually buy about 20 of them. They look so pretty in his vase at the window!
Problem solver: I can get those dahlias for you quickly. Go sit on the bench over here while you’re waiting; I’ll be back in five minutes.
Grandma: You would do that for me? What a generous young man you are!
A mediocre problem solver would have helped the grandma cross the road — and perhaps forgotten that she also needs to cross back, watching out for cars and protecting her flowers on the way.
A good problem solver realizes that what grandma really wants is 20 red dahlias for her husband, and completes that task.
4. Brainstorm rapidly and intensely
Understanding a problem makes solutions easy. However, you may not have all the information needed to solve the problem.
Additionally, retrieving crucial information can be difficult.
Say you want to start a blog, but you don't know your readers' interests. And you can't ask them, because you don't know who they are yet.
This is where brainstorming helps. Set a timer (most smartphones have one) to ring after five minutes. Until it rings, write down as many topics as possible.
No answer is wrong. Note everything.
Sort the topics later. Which might interest readers — programming or data science? And which would they scroll past — the socks you wore this morning?
Rank your ideas intuitively and logically, then write Medium stories using the top 3-5 ideas.
5. Google it
This might seem trivial, but Doctor Google can answer a lot of questions. Once you understand your problem, try googling or binging it.
Someone has probably had your problem before. The problem-solver may have posted their solution online.
Use others' experiences. If you're social, ask a friend or coworker for help.
6. Consider it later
Rest your brain.
Read that again: your brain needs rest to function.
Hustle culture encourages working 24/7. It doesn't take a neuroscientist to see that this is mental torture.
Leave an unsolvable problem. Visit friends, take a hot shower, or do whatever you enjoy outside of problem-solving.
Nap.
I get my best ideas in the morning after working on a problem. I couldn't have had these ideas last night.
Your subconscious keeps working while you sleep. Leave the problem alone, and you may be surprised by the genius it produces.
7. Learn to live with frustration
There are problems that you’ll never solve.
Mathematicians are world-class problem-solvers. The brightest minds in history have failed to solve many mathematical problems.
A Gordian-knot problem can frustrate you even though you're smart.
Frustration-haters don't solve problems well. They choose simple problems to avoid frustration.
Great problem solvers don't do that: they want to solve the problem, but they also know when to give up.
Frustration hurts at first. Then you adapt.
Famous last words
If you're reading this article, you probably solve problems regularly. We've covered many ways to improve, so here's a summary:
Test your assumptions. When you see an issue, is it the same issue for everyone else? Or are your biases and snap judgments misguiding you?
Comprehend the issue completely. A problem may seem straightforward on the surface, but what's really going on? Think two steps ahead of the current situation and try to see what it might be building up to.
Request more information. You are no longer a high-school student; a two-sentence problem statement is not sufficient for a solution. Ask away if you need more details!
Brainstorm quickly and thoroughly. In a constrained amount of time, try to write down all your thoughts. All ideas are worthwhile! You can order them later.
Google it. There is a purpose for the internet. Use it.
Consider it later. A rested mind is more creative. It might seem counterintuitive to leave a problem unresolved, but while you're sleeping, your subconscious handles the laborious tasks.
Accept frustration as a normal part of the process. Don't give up just because you're frustrated — it's part of the procedure. But it's also perfectly acceptable to drop a problem when other, more pressing issues need attention.
You might feel stupid sometimes, but that just shows that you’re human. You care about the world and you want to make it better.
At the end of the day, that’s all there is to problem solving — making the world a little bit better.
Nate Kostar
3 years ago
Deadmau5's PIXELYNX and Beatport Launch Festival NFTs
Pixelynx, a music metaverse gaming platform, has teamed up with Beatport, an online music retailer specializing in electronic music, to launch a Synth Heads non-fungible token (NFT) collection.
Pixelynx was founded by Richie Hawtin and Joel Zimmerman, better known as Deadmau5. In January 2022, they released their first Beatport NFT drop, which saw 3,030 generative NFTs sell out in seconds.
The limited-edition Synth Heads NFTs will be released in collaboration with Junction 2, the UK's largest techno festival; holding one will grant fans special access tickets and experiences at the London-based festival.
Membership in the Synth Head community, day passes to the Junction 2 Festival 2022, Junction 2 and Beatport apparel, special vinyl releases, and continued access to future ticket drops are just a few of the experiences available.
Five lucky NFT holders will also receive a Golden Ticket, which includes access to a backstage artist bar and tickets to Junction 2's next large-scale London event this summer, in addition to full festival entrance for both days.
The Junction 2 festival will take place at Trent Park in London on June 18th and 19th, and will feature performances from Four Tet, Dixon, Amelie Lens, Robert Hood, and a slew of other artists. Holders of the original Synth Head NFT will be granted admission to the festival's guestlist as well as line-jumping privileges.
The new Synth Heads collection contains 300 NFTs.
NFTs that provide IRL utility are in high demand.
The benefits of NFT drops related to In Real Life (IRL) utility aren't limited to Beatport and Pixelynx.
Coachella, a well-known music event, recently partnered with cryptocurrency exchange FTX to offer free NFTs to 2022 pass holders. Access to a dedicated entry lane, a meal and beverage pass, and limited-edition merchandise were all included with the NFTs.
Coachella also has its own NFT store on the Solana blockchain, where fans can buy Coachella NFTs and digital treasures that unlock exclusive on-site experiences, physical objects, lifetime festival passes, and "future adventures."
Individual artists and performers have begun taking advantage of NFT technology outside of large music festivals like Coachella.
DJ Tiësto has revealed that he will release a VIP NFT for his upcoming "Eagle" collection during the EDC festival in Las Vegas in 2022. This NFT, dubbed "All Access Eagle," gives collectors the best chance to get NFTs from his first drop, as well as exclusive access to the track "Repeat It."
NFTs are one-of-a-kind digital assets that can be verified, purchased, sold, and traded on blockchains, opening up new possibilities for artists and businesses alike. Time will tell whether Beatport and Pixelynx's Synth Head NFT collection will be successful, but if it's anything like the first release, it's a safe bet.
