Alterslash

the unofficial Slashdot digest
 

Contents

  1. Linux Finally Starts Removing Support for Intel’s 37-Year-Old i486 Processor
  2. Russia’s VPN Crackdown Caused Bank Outages, Telegram Founder Says
  3. Artemis Astronauts Enter Moon’s Gravitational Pull, Catch First Glimpses of Far Side
  4. Internet Bug Bounty Pauses Payouts, Citing ‘Expanding Discovery’ From AI-Assisted Research
  5. Claude Code Leak Reveals a ‘Stealth’ Mode for GenAI Code Contributions - and a ‘Frustration Words’ Regex
  6. Hundreds of Theatres Show Apocalyptic-Yet-Optimistic New Movie, ‘The AI Doc’
  7. Will ‘AI-Assisted’ Journalists Bring Errors and Retractions?
  8. Crooks Behind $27M in ‘Refund’ Scams Busted By YouTube Pranksters After Being Lured to Fake Funeral
  9. Apple Brings Device-Level Age Verification to Two More Countries
  10. Chrome 148 Will Start ‘Lazy Loading’ Video and Audio to Improve Performance
  11. Scientists Engineered a Plant To Produce 5 Different Psychedelics At Once
  12. Does Ubuntu Now Require More RAM Than Windows 11?
  13. Apple’s First 50 Years Celebrated - Including How Steve Jobs Finally Accepted an ‘Open’ App Store
  14. Top NPM Maintainers Targeted with AI Deepfakes in Massive Supply-Chain Attack, Axios Briefly Compromised
  15. Microsoft Pulls Then Re-Issues Windows 11 Preview Update. Also Begins Force-Updating Windows 11

Alterslash picks up to five of the best comments from each of the day’s Slashdot stories and presents them on a single page for easy reading.

Linux Finally Starts Removing Support for Intel’s 37-Year-Old i486 Processor

Posted by EditorDavid
“It’s finally time,” writes Phoronix — since “no known Linux distribution vendors are still shipping with i486 CPU support.”

“A patch queued into one of the development branches ahead of the upcoming Linux 7.1 merge window is set to finally begin the process of phasing out and ultimately removing Intel 486 CPU support from the Linux kernel.”

More details from XDA-Developers:
Authored by Ingo Molnar, the change, titled “x86/cpu: Remove M486/M486SX/ELAN support,” begins dismantling Linux’s built-in support for the i486, which was first released back in 1989. As the changelog notes, even Linus is keen to cut ties with the architecture: “In the x86 architecture we have various complicated hardware emulation facilities on x86-32 to support ancient 32-bit CPUs that very very few people are using with modern kernels. This compatibility glue is sometimes even causing problems that people spend time to resolve, which time could be spent on other things. As Linus recently remarked: ‘I really get the feeling that it’s time to leave i486 support behind. There’s zero real reason for anybody to waste one second of development effort on this kind of issue’…”

If you’re one of the rare few who still keep the decades-old CPU alive, your best bet will be to grab an LTS Linux distro that will keep an older kernel around for a few more years.

Hubble out of support

By GPLHost-Thomas • Score: 5, Interesting
Oh… How will Hubble do, since it runs a 486 DX4? :)

Re:Typical Stupidity

By redback • Score: 5, Insightful

Name one other mainstream OS that supports 486 processors.

Pray tell, what modern desktop runs in 64MB of RAM

By Pizza • Score: 5, Informative

…Because that’s the upper limit of high-end 486 motherboards.

The 80486 was essentially e-waste by the year 2000. Even ultra-conservative Debian completely dropped support for the i486 over a decade ago (with Squeeze going out of LTS in February of 2016 after a 5-year run).

Incidentally, the first Linux install I ever performed was in early 1997, on an ISA-only 486DX33 motherboard + 200MB pre-DMA IDE drive that I literally rescued from the trash.

Re:Typical Stupidity

By DarkOx • Score: 5, Insightful

and also needs modern kernel features

This is the part everyone seems to miss when they get freaked out about Linux itself or some distribution dropping support for something 30 years old…

In 2026, if you are still using a computer older than the mid-90s (and more than likely even one from after the mid-90s), it is because it is part of some very specific process that almost certainly has you not making changes, and that almost certainly includes software changes too.

Just because Linux 7.x can’t be built for i486 any more does not stop you from grabbing any prior version and using that. Thinking about 486s specifically, I know there are actually a lot of odd things like hardened industrialized PCs and some routers and the like running licensed 486 cores and late-manufacture Intel parts that are still in use. You can even still buy some new. It would not surprise me to learn people are running Linux on a good number of them; it would surprise me to learn people are running Linux newer than 5.10 or 5.15 on them. Even in the most exotic memory configurations a 486 is going to top out at 3.5GB of memory; I guess you could get nearer to 4GB on an ISA-only system (no PCI or VLB). Are you really going to burn 16MB or more of that just on the kernel?

Let’s be real: if you are running a 486 you are probably using Linux 2.0 - 2.6 already. Not being able to use 7 hardly affects you.

Re:Typical Stupidity

By Voice of satan • Score: 4, Informative

Not sure what you mean by “mainstream” but the BSD distros do. :)

Russia’s VPN Crackdown Caused Bank Outages, Telegram Founder Says

Posted by EditorDavid
Russia’s “great crackdown” on VPNs — and a clampdown on Telegram’s messaging platform — had an unintended side effect, reports Bloomberg. It “triggered the widespread banking outage” seen across the country this week, Telegram’s billionaire founder Pavel Durov said.
“Telegram was banned in Russia, yet 65 million Russians still use it daily via VPNs,” Durov said Saturday in a post on Telegram. “The government has spent years trying to ban VPNs too. Their blocking attempts just triggered a massive banking failure; cash briefly became the only payment method nationwide yesterday.” Attempts on Friday to limit VPN use could have sparked the disruption affecting banking apps, The Bell and other Russian media reported, citing industry sources who weren’t identified.

The outage may have been caused by an overload in the filtering systems run by Russia’s communications watchdog, according to the reports, with experts warning that major restrictions risk undermining network stability… Separately, payments for Apple Inc.’s app store and other services became unavailable in Russia from April 1, the US company said on its website, without saying why. Earlier, RBC newswire reported that the Digital Development Ministry had asked mobile operators to disable top-ups, which could help limit VPN use....

Durov, who’s being investigated in Russia for allegedly aiding terrorist activity, compared the situation in his home country to Iran, where similar restrictions prompted widespread adoption of VPNs instead of the intended shift to state-backed messaging apps. “Welcome back to the Digital Resistance, my Russian brothers and sisters,” said Durov, who has lived in Dubai and France in recent years. “The entire nation is now mobilized to bypass these absurd restrictions,” he wrote, adding that Telegram would continue adapting to make its traffic harder to detect and block.

Slack Huddles

By Malc • Score: 5, Informative

Slack Huddles have also been taken out for most of the past month. Slack in general still works, but huddles see people unable to join or continuously losing their connection. It also might be ISP-specific (some users can use huddles, while the majority cannot). The general consensus amongst my Russian team members is that it’s mostly about Telegram and broad blocking or filtering of AWS IP address ranges. Maybe it’s something else; it’s hard to say.

Power corrupts, absolute power corrupts absolutely

By Artem S. Tashkinov • Score: 5, Insightful

Under Putin and his cronies, Russia has been drifting towards North Korea under the pretext of fighting with NATO. Money that could have been spent on education, science, healthcare and even space has been diverted to the war, where a new group of people are now stealing state money even more proficiently than before. With normal state contracts, there is at least some oversight; in war, you can steal everything and blame it on anything you want.

The internet is just one of many ways in which freedoms are being restricted. The war on VPNs exists for the sole purpose of controlling all information inflows and outflows, and even communications for its own citizens. Years ago, Viber, Facebook, Instagram and Twitter were banned, and more recently, WhatsApp, Wire and, to a large extent, Telegram followed. Nowadays, you’re expected to only use the state-developed and controlled Max messenger, where anything you say goes straight to the Kremlin.

Funnily enough, Putin doesn’t stop there. He knows how privatisation was carried out in the early ‘90s, so the oligarchs who helped him rise to power and increase his authority, ultimately allowing him to rule for life, are no longer safe. They can lose everything they’ve grabbed in an instant under any pretext. That’s been happening for years now.

Anyone who has opposed the government has either been physically eliminated, such as Navalny and Nemtsov, or jailed. The rest have escaped the country and now live in exile. If you dare to say anything against the war, the government or Putin himself, it is tantamount to treason and is punished by conscription into the army, extermination or up to 15 years in prison. The police are very well paid and on the government payroll, so they will beat the hell out of you if you try to hold a rally alone in any public square in a major Russian city. Anyone who still criticizes the government publicly is branded a foreign agent.

And the Russian people continue to endure despite their quality of life falling through the floor, runaway inflation, an inability to buy foreign goods and services, an inability to travel freely, and having to deal with an infinite number of flight delays for those who can still afford to travel to other countries.

I’m struggling to understand who still supports any of this. It’s not enough to be a vatnik anymore; you have to be a literally insane vatnik.

It’s quite sad that the West could have intervened in the years leading up to the Bolotnaya Square case, but decided that the Russians could figure it out themselves. The result is an authoritarian mafia state that is destroying its own country while continuing to funnel most of its profits to the West. Most of these people raise their children in the West, send them to Western universities and the kids barely speak Russian themselves.

If you ever thought that this was all being done to make Russia truly independent, it’s all a sham and a façade. The real foreign agents who are doing everything to make Russia less competitive reside in the Kremlin.

Russia rivals Norway and the UAE when it comes to natural resources. This vast wealth could have been used to transform Russia into a literal paradise on earth. Instead, however, it is being spent on destroying a neighbour that dared to assert its allegiance to the West.

It’s pure insanity.

Re:And soon for sure we will be in

By quonset • Score: 4, Insightful

However, it was Obama and Biden that laid the groundwork for the war against Ukraine

This site needs to filter out AI-generated nonsense. It’s both hilarious and annoying to have these hallucinations posted.

It’s especially annoying when the AI can’t be bothered to do the slightest bit of research and read the Budapest Memorandum. We know AI can’t understand what it means, but it could at least scan it into its database.

Artemis Astronauts Enter Moon’s Gravitational Pull, Catch First Glimpses of Far Side

Posted by EditorDavid
NASA’s Artemis astronauts are now entering “the lunar sphere of influence,” reports NBC News, “meaning the pull of the moon’s gravity will become stronger than Earth’s.” Now as they begin their swing around the moon, the Artemis astronauts “are chasing after Apollo 13’s maximum range from Earth,” reports the Associated Press, hoping to beat its distance from Earth by more than 4,100 miles (6,600 kilometers).

They’ll begin their six-hour lunar flyby 14 hours from now (at 2:45 p.m. ET Monday). But in a space-to-Earth interview Saturday with NBC News, the astronauts were already describing their first glimpses of the edge of the far side:
[NASA astronaut Christina Koch realized] it looked different from what she was accustomed to on Earth. “The darker parts just aren’t quite in the right place,” she said. “And something about you senses that is not the moon that I’m used to seeing....”

[Astronaut Reid] Wiseman called the flight a “magnificent accomplishment” and said the astronauts’ ability to gaze at both Earth and the moon from their spacecraft has been “truly awe-inspiring.” “The Earth is almost in full eclipse. The moon is almost in full daylight, and the only way you could get that view is to be halfway between the two entities,” he said… And while the early photos of Earth and the moon that [Canadian astronaut Jeremy] Hansen and his colleagues have beamed back have been spectacular, the Canadian astronaut said they pale in comparison to the real deal outside their capsule’s windows. “I know those photos are amazing,” he said, “but let me assure you, it is another level of amazing up here.”
And their upcoming six-hour lunar flyby “promises views of the moon’s far side that were too dark or too difficult to see by the 24 Apollo astronauts who preceded them,” notes the Associated Press:
A total solar eclipse also awaits them as the moon blocks the sun, exposing snippets of shimmering corona.... At closest approach, they will come within 4,070 miles (6,550 kilometers) of the moon. Because they launched on April 1, the rendezvous won’t have as much of the far lunar side illuminated as other dates would have. But the crew still will be able to make out “definite chunks of the far side that have never been seen” by humans, said NASA geologist Kelsey Young, including a good portion of Orientale Basin.

They’ll call down their observations as they photograph the gray, pockmarked scenes. There’s a suite of professional-quality cameras on board, and each astronaut also has an iPhone for more informal, spur-of-the-minute picture-taking… Orion will be out of contact with Mission Control for nearly an hour when it’s behind the moon. The same thing happened during the Apollo moonshots. NASA is relying on its Deep Space Network to communicate with the crew, but the giant antennas in California, Spain and Australia won’t have a direct line of sight when Orion disappears behind the moon for approximately 40 minutes…

Once Artemis II departs the lunar neighborhood, it will take four days to return home. The capsule will aim for a splashdown in the Pacific near San Diego on April 10, nine days after its Florida launch. During the flight back, the astronauts will link up via radio with the crew of the orbiting International Space Station. This is the first time that a moon crew has colleagues in space at the same time and NASA can’t pass up the opportunity for a cosmic chitchat.

It certainly is, IF…

By tiqui • Score: 5, Insightful

you want human beings to ever be anything more than scurrying about on Earth becoming gradually better at killing each other until they eventually succeed or the sun burns out (your choice).

Here’s the thing: ANY human voyage to any other place in the universe will be vastly more difficult and dangerous and require more time away from Terra Firma. Therefore, the Moon is a perfect place to learn what we need to learn, and to practice (and get good at) the things we will need to be excellent at in order to manage ANY further exploration. If we cannot get the toilet right on a lunar mission, then any other space destination is right out. We could learn all the same lessons with a destination like Mars, BUT that would be vastly more expensive, and take a huge amount of additional time (each flight would take months vs days, and the launch windows are years apart rather than weeks apart). This is what even Elon Musk has recently surrendered to. When we have mastered the regular lunar flights with sustained time on the lunar surface, we will finally know how to learn to do Mars without going bankrupt and killing lots of crews.

Humans are more capable on site ....

By drnb • Score: 5, Interesting
A human on site is far more capable than a robot. A robot will have far greater endurance at a site. Which is better depends on the mission, the tasks to be done.

Also a big part of these missions is to develop and test the tech necessary for manned missions.

Its really all about logistics

By drnb • Score: 5, Informative
Mars will likely require the following infrastructure: a space station in Earth orbit, a space station in lunar orbit, a lunar base, a space station in orbit around Mars, and then a Mars base. Like military operations, it’s really all about logistics. Can we squeak in a direct recon flight? Sure, but more serious stuff will require infrastructure.

Toss in some local acquisition and processing of resources at some point, e.g. H2 and O2 for air, water, and fuel.

Re:54 Years to Do Less

By martin-boundary • Score: 4, Informative
Here’s your answer.

What, this ‘Far Side’?

By Bruce66423 • Score: 4, Funny

https://www.thefarside.com/

Internet Bug Bounty Pauses Payouts, Citing ‘Expanding Discovery’ From AI-Assisted Research

Posted by EditorDavid
The Internet Bug Bounty program “has been paused for new submissions,” they announced last week.

Running since 2012, the program is funded by “a number of leading software companies,” reports InfoWorld, “and has awarded more than $1.5m to researchers who have reported bugs.”
Up to now, 80% of its payouts have been for discoveries of new flaws, and 20% to support remediation efforts. But as artificial intelligence makes it easier to find bugs, that balance needs to change, HackerOne said in a statement. “AI-assisted research is expanding vulnerability discovery across the ecosystem, increasing both coverage and speed. The balance between findings and remediation capacity in open source has substantively shifted,” said HackerOne.

Among the first programs to be affected is the Node.js project, a server-side JavaScript platform for web applications known for its extensive ecosystem. While the project team will continue to accept and triage bug reports through HackerOne, without funding from the Internet Bug Bounty program it will no longer pay out rewards, according to an announcement on its website…

[J]ust last month, Google also put a halt to AI-generated submissions provided to its Open Source Software Vulnerability Reward Program.
The Internet Bug Bounty stressed that “We have a responsibility to the community to ensure this program effectively accomplishes its ambitious dual purpose: discovery and remediation. Accordingly, we are pausing submissions while we consider the structure and incentives needed to further these goals…”

“We remain committed to strengthening open source security. Working with project maintainers and researchers, we’re actively evaluating solutions to better align incentives with open source ecosystem realities and ensure vulnerability discoveries translate into durable remediation outcomes.”

I didn’t read the article…

By Brain-Fu • Score: 3

…but that sure won’t stop me from passing judgment!

This sounds like a clear case of “AI makes it so easy to find bugs now that we don’t need to pay out cash to entice others to do it anymore.”

The great adjustening of labor value

By T34L • Score: 3, Insightful

I think this is a pretty great bottled example of how AI can be simultaneously super transformative to society and, at the same time, how companies like OpenAI and Anthropic can be insanely overvalued, representing a colossal bubble of sentiment that’s never going to see a long-term return on investment.

A lot of the currently lucrative-seeming uses of AI that promise to make big bucks for anyone with their fingers in the pie are based on the observation that, hey, there’s this whole million-dollar market that you can profit off insanely easily with a clawdbot running on your DGX Spark or whatever.

Except it turns out that once enough people get that idea, they first overwhelm the next immediate bottleneck: validating that the “fixes” don’t introduce other bugs or, worse yet, deliberate backdoors or something else. And before the dust of that settles, it turns out everyone now has the same sauce you tried to sell them, at a fraction of the cost they’d ever have considered paying you.

And so what looked like a lucrative oil well swollen with potential gets throttled to a drip of selective access, with additional friction on your attempts at exploiting it that wasn’t there until you arrived with your extraction system.

Meanwhile, as you struggle to get a return on investment from your overgrown Bugfixing Someone Else’s Code Factory, cheaper, more flexible and, most importantly, more verifiable semi-automated bugfixing makes it into the internal pipelines of your would-be customers. They deploy their AI alongside their existing programmers, who have just the know-how needed to achieve the same thing with a fraction of the trial and error (and thus compute), and with out-of-band information (and thus less model and dataset complexity), that you needed to get it to work at all.

Similarly, there seems to be a strong notion that all those bizfolk have all those ditzy secretaries, and all they do is churn out politely worded emails to other bizfolk, decipher the politely worded emails coming back, and maybe copy some numbers between Excel spreadsheets. And all that for like five grand a month! You want that money, and so you present an email’n’Excel service that can get all that money for itself and only costs a little bit of a datacenter to run. Except, whoops, two years down the line a Chinese company starts selling a NUC-sized paperweight that can do all the same without a subscription, with all that precious customer bizinfo staying exclusively on-premise, which your prospective customers seem weirdly picky about after you’ve had two dozen major data leaks.

Of course, economies of scale and capital consolidation will always give the advantage to large companies with a lot of money, but all it takes is for you, the market-leading capitalist, to make just a couple of bad bets and end up having your lunch eaten by competitors a fraction of your size (Intel). I’m fairly sure even Nvidia isn’t immune to that, especially as the very ease of code development they enable eats away at the moat of client lock-in: CUDA and the rest of their software framework (it’ll be really interesting to see when “cleanroom derivatives” of that whole stack start popping up). Plus, a pretty sizeable chunk of the world is being forced to look for alternatives for political reasons (the US government trying to throttle the flow of compute capability to China is truly a more potent innovation motivator for companies within China than the Party could ever hope to come up with, let alone enforce).

The world sure is changing a lot, and everyone’s squinting at the future, but a lot of the big, high-level decisions being made sure seem as short-sighted as ever.

Re:I didn’t read the article…

By test321 • Score: 5, Insightful

True, but I think this is a phase. AI is going to find thousands of bugs in the coming year that are low-hanging fruit for its capabilities; the bug bounty programme shouldn’t pay for that.
But as development teams integrate AI, old code gets fixed and new code won’t include the bugs AI can find. Then the usefulness of AI bug reports will decrease again, settling at a new baseline where human security teams (using AI tools and also their brains) are needed to find the bugs that AI can’t figure out.

Claude Code Leak Reveals a ‘Stealth’ Mode for GenAI Code Contributions - and a ‘Frustration Words’ Regex

Posted by EditorDavid
That leak of Claude Code’s source code revealed “all kinds of juicy details,” writes PC World.

The more than 500,000 lines of code included:

- An ‘undercover mode’ for Claude that allows it to make ‘stealth’ contributions to public code bases
- An ‘always-on’ agent for Claude Code
- A Tamagotchi-style ‘Buddy’ for Claude

“But one of the stranger bits discovered in the leak is that Claude Code is actively watching our chat messages for words and phrases — including f-bombs and other curses — that serve as signs of user frustration.”
Specifically, Claude Code includes a file called “userPromptKeywords.ts” with a simple pattern-matching tool called a regex, which sweeps each and every message submitted to Claude for certain text matches. In this particular case, the regex pattern is watching for “wtf,” “wth,” “omfg,” “dumbass,” “horrible,” “awful,” “piece of —” (insert your favorite four-letter word for that one), “f— you,” “screw this,” “this sucks,” and several other colorful metaphors… While the Claude Code leak revealed the existence of the “frustration words” regex, it doesn’t give any indication of why Claude Code is scouring messages for these words or what it’s doing with them.

Re:What I find amusing is…

By Mal-2 • Score: 5, Informative

LLMs don’t actually know their own capabilities. The description of what they *should* do is baked into the training data, but this doesn’t always correlate with their actual abilities. Sometimes they can do things and not even know it, and they can’t tell if tools they should have are being disabled in some way. For example, Qwen 3.5 is a vision-capable model, but enabling vision in llama.cpp requires loading an additional file with the --mmproj parameter. The model will think it has vision enabled whether the extra file is loaded or not.

Re:What I find amusing is…

By Brain-Fu • Score: 4, Informative

My understanding is that the code leak covers the client-side tool, not the LLM. Did I misunderstand?

Because there isn’t any reason why the LLM would know all of the capabilities of the tool. The LLM would only “know” whatever documentation the tool provides about itself in the posts it sends to the LLM as part of the user’s posts. That and possibly information about the tool that might be in training data or available online for the tool to retrieve via a web scour.

Frustration watch to improve retention

By fleeped • Score: 4, Interesting

IMO it’s not rocket science - if the user is frustrated, start being extra manipulative, agreeable and soothing, to avoid losing customers.

Re:Could someone post the frustration regex code?

By quenda • Score: 5, Informative

Ask Claude? He says:

This came out of the Claude Code source leak on March 31, 2026, when Anthropic accidentally shipped a source map in their npm package, exposing ~512,000 lines of TypeScript source code.
The regex lives in a file called userPromptKeywords.ts and looks like this:
/\b(wtf|wth|ffs|omfg|shit(ty|tiest)?|dumbass|horrible|awful|piss(ed|ing)? off|piece of (shit|crap|junk)|what the (fuck|hell)|fucking? (broken|useless|terrible|awful|horrible)|fuck you|screw (this|you)|so frustrating|this sucks|damn it)\b/

As for what it’s for: according to researcher Alex Kim, who first documented it, the signal doesn’t change the model’s behavior or responses — it’s a product health metric to track whether users are getting frustrated, and whether that rate goes up or down across releases.
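The leaked file shows the pattern but not how its matches are consumed. As a rough illustration only, here is a minimal TypeScript sketch of how a client-side tool could turn that regex into the kind of per-release health metric Kim describes; the pattern is the leaked one, while the function names and aggregation logic are hypothetical:

// Hypothetical sketch: only FRUSTRATION_RE comes from the leak.
const FRUSTRATION_RE =
  /\b(wtf|wth|ffs|omfg|shit(ty|tiest)?|dumbass|horrible|awful|piss(ed|ing)? off|piece of (shit|crap|junk)|what the (fuck|hell)|fucking? (broken|useless|terrible|awful|horrible)|fuck you|screw (this|you)|so frustrating|this sucks|damn it)\b/;

// Test a single prompt; lowercase first, since the leaked pattern is all lowercase.
function isFrustratedPrompt(prompt: string): boolean {
  return FRUSTRATION_RE.test(prompt.toLowerCase());
}

// Aggregate into a product-health metric (share of prompts with a hit),
// rather than changing the model's behavior per message.
function frustrationRate(prompts: string[]): number {
  if (prompts.length === 0) return 0;
  return prompts.filter(isFrustratedPrompt).length / prompts.length;
}

Tracked across releases, a rising frustrationRate would flag a regression even without anyone reading individual messages.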

Frustration indexes have been used before …

By drnb • Score: 3
Frustration indexes have been used before in other customer-facing industries, in particular to determine if users are being over-served or under-served. A certain level of dissatisfaction is expected; if it’s too low, then too many resources are being spent per user, i.e. wasting money. Maximizing revenue often involves a non-zero frustration index. Seriously, service that is too good is sometimes considered a bad thing.

Hundreds of Theatres Show Apocalyptic-Yet-Optimistic New Movie, ‘The AI Doc’

Posted by EditorDavid
Hundreds of theatres are now showing a new documentary called The AI Doc: Or How I Became An Apocaloptimist. Variety calls it “playful and heady,” edited “with a spirit of ADHD alertness.” The New York Times suggests it “tries to cover so much that it ends up being more confusing than clarifying, but parts are fascinating.”

But the Los Angeles Times calls it an “aggravating soup of information and opinion that wants to move at the speed of machine thought.” So while co-director Daniel Roher asks whether he should bring a child into a world with AI, “Perhaps more urgently, should Roher have made an AI doc that treats us like children?”
First, he parades all the safety doomers, seeming to believe their warnings that an unfeeling superintelligence is upon us and we can’t trust it. Then, sufficiently disturbed, he hauls in the AI cheerleaders, a suspiciously positive gang who can envision only medical miracles and grindless lives in which we’re all full-time artists. Only then, after this simplistic setup where platitudes reign, do we get the section in which the subject is treated like the brave (and grave) new world it is: geopolitically fraught, economically tenuous and a playground for billionaires.

Why couldn’t the complexity have been the dialogue from the beginning, instead of the play-dumb cartoon “The AI Doc” feels like for so long? Maybe Roher believes this is what our increasingly gullible, truth-challenged citizenry needs from an explanatory doc: a flashy, kindhearted reminder that we’re the change we need to be.
Read more reactions here and here. Mashable warns the documentary’s director “will ultimately craft a journey that feels like a panic attack in real time. In the end, you may not feel better about mankind’s chances against the rise of AI. But you’ll likely feel less helpless in the future before us all.”

They also point out that the film “shares some ways its audience can more actively be a part of the conversation, and provides a link to the film’s website for engagement,” where 6,948 people have now signed up for its newsletter. (“Demand a seat at the table,” urges its signup button, under a warning that “Government and AI companies are designing our future without us. We need to reclaim our voice in shaping the future of AI…”)

This movie explains the situation well…

By atrimtab • Score: 3

…for your non-tech-industry associates and relatives.

The conclusion will hopefully start a lot of discussions and activism to prevent the dystopia path, the chaos path, or the extinction path.

Like this deeper, more complete one, or the Schoolhouse afterschool-special version.

Will ‘AI-Assisted’ Journalists Bring Errors and Retractions?

Posted by EditorDavid
Meet the “journalist” who “uploads press releases or analyst notes into AI tools and prompts them to spit out articles that he can edit and publish quickly,” according to the Wall Street Journal.

“AI-assisted stories accounted for nearly 20% of Fortune‘s web traffic in the second half of 2025.” And most were written by 42-year-old Nick Lichtenberg, who has now written over 600 AI-assisted stories, producing “more stories in six months than any of his colleagues at Fortune delivered in a year.”
One Wednesday in February, he cranked out seven. “I’m a bit of a freak,” Lichtenberg said… A story by Lichtenberg sometimes starts with a prompt entered into Perplexity or Google’s NotebookLM, asking it to write something based on a headline he comes up with. He moves the AI tools’ initial drafts into a content-management system and edits the stories before publishing them for Fortune’s readers… A piece from earlier that morning about Josh D’Amaro being named Disney CEO took 10 minutes to get online, he said…

Like other journalists, Lichtenberg vets his stories. He refers back to the original documents to confirm the information he’s reporting is correct. He reaches out to companies for comment. But he admits his process isn’t as thorough as that of magazine fact-checkers.
While Lichtenberg started out saying his stories were co-authored with “Fortune Intelligence”, he now typically signs his own name, according to the article, “because he feels the work is mostly his own.” (Though his stories “sometimes” disclose generative AI was used as a research tool…) The article asks whether he could be “a bellwether for where much of the media business is headed…”

“Much of the content people now consume online is generated by artificial intelligence, with some 9% of newly published newspaper articles either partially or fully AI-generated, according to a 2025 study led by the University of Maryland. The number of AI-generated articles on the web surpassed human-written ones in late 2024, according to research and marketing agency Graphite.”
Some executives have made full-throated declarations about the threat posed by AI. New York Times publisher A.G. Sulzberger said AI “is almost certainly going to usher in an unprecedented torrent of crap,” referencing deepfakes as an example. The NewsGuild of New York, the union representing Fortune employees and journalists at other media outlets, said the people are what makes journalism so powerful. “You simply can’t replicate lived experiences, human judgment and expertise,” said president Susan DeCarava.

For Chris Quinn, the editor of local publications Cleveland.com and the Plain Dealer, AI tools have helped tame other torrents facing the industry. AI has allowed the outlets to cover counties in Ohio that otherwise might go ignored by scraping information from local websites and sending “tips” to reporters, he said. It has also edited stories and written first drafts so the newsrooms’ journalists can focus on the calls, research and reporting needed for their stories.... Newsrooms from the New York Times to The Wall Street Journal are deploying AI in various ways to help reporters and editors work more efficiently....

Not all newsrooms disclose their use of AI, and in some cases have rolled out new tools that resulted in errors or PR gaffes. An October study from the European Broadcasting Union and the BBC, which relied on professional journalists to evaluate the news integrity of more than 3,000 AI responses, found that almost half of all AI responses had at least one significant issue.
Last week the New York Times even issued a correction when a freelance book reviewer using an AI tool unknowingly included “language and details similar to those in a review of the same book published in The Guardian.” But it was actually “the second time in a few days that the Times was called out for potential AI plagiarism,” according to the American journalist writing The Handbasket newsletter.
We must stem the idea being pushed by tech companies and their billionaire funders who’ve sunk too much into their products to admit defeat that the infiltration of AI into journalism is inevitable; because from my perch as an independent journalist, it simply is not…

Some AI-loving journalists appear to believe that if they’re clear enough with the AI program they’re using, it will truly understand what they’re seeking and not just do what it’s made to do: steal shit… If you want to work with machines, get a job that requires it. There are a whole lot more of those than there are writing jobs, so free up space for people who actually want to do the work. You’re not doing the world a favor by gifting it your human/AI hybrid. Journalism will not miss you if you leave…
But meanwhile, USA Today recently tried hiring for a new position: AI-Assisted reporter. (The lucky reporter will “support the launch and scaling of AI-assisted local journalism in a major U.S. metro,” working with tools including Copilot and Perplexity, pioneering possible future expansions and “AI-enabled newsroom operations that support and augment human-led journalism.”) And Google is already sponsoring a "publishing innovation award"…

No

By Baloo Uriza • Score: 3
Not only because headlines that end in a question mark are automatically answered with “no”, but also because corrections and retractions are more effort and journalistic integrity than outlets using AI “writers” can manage.

Unlikely to change anything

By Sethra • Score: 3

“Journalists” have been caught making bold false claims followed by unread retractions for years; now they will blame it on AI, just as CEOs blame AI for job cuts.

Unfortunately the media is rewarded by clicks, not truth.

This is a genuine concern

By GeekWithAKnife • Score: 4, Funny
With all the perfect, unbiased journalism going around will AI lower the bar?!

Forget reporters. Support monks.

By SubmergedInTech • Score: 3

If you want to work with machines, get a job that requires it. There are a whole lot more of those than there are writing jobs, so free up space for people who actually want to do the work.

Do you mean scribes? Because I’m pretty sure all other writing jobs involve working with machines.

Oddly, journalism doesn’t exist to provide jobs to people who wanted to get writing degrees. It exists to report the news.

The loss of classified advertising and the rise of the internet have drastically reduced funding for traditional journalism. The shift from daily newspapers and the evening news to a 24/7 news cycle has also cut the amount of time journalists have to gather and vet information. If you want to take notes with a pad and pencil and have your photographer take their film to a darkroom, you’re simply too inefficient in the 21st century.

The critical missing pieces are editors. In the local paper I subscribe to (San Jose Mercury News), I see obvious grammatical and spelling issues on a daily basis, which implies nobody but the original journalist even read the article before it got published in a print newspaper. And they didn’t bother to use a spell-checker, either. That extends to the reporting, too; studies are misquoted, statistics misused, important facts left out, and assertions unsupported.

That’s before the use of AI to write drafts. So I don’t see how this makes it much worse.

In fact, AI might actually make it better, whether it’s writing the draft so the journalist can take on the role of editor, or taking the role of editor after the journalist writes their own draft. Because clearly none of that is happening right now at my local paper. Doesn’t matter if it’s always right. In fact, it’s probably better if it hallucinates 10% of the time, because that way the journalist can’t rely too heavily on it.

(Yes, in an ideal world, we’d have real human journalists and editors, and enough time for vetting to take place. Are people willing to pay for that? Evidently not.)

Crooks Behind $27M in ‘Refund’ Scams Busted By YouTube Pranksters After Being Lured to Fake Funeral

Posted by EditorDavid
One crime ring scammed 2,000 elderly people of more than $27 million between 2021 and 2023 using tech support/bank impersonation/refund scams. “Victims were in their 70s and 80s,” reports the U.S. Attorney’s office for California’s southern district. Victims were first told they’d received a refund (either online or via phone), but then told they’d been “over-refunded” a massive amount, and asked to return that amount.

But 42-year-old Jiandong Chen just admitted Thursday in a U.S. federal court that he was involved in the fraud and money laundering via cryptocurrency — pleading guilty to two charges with maximum penalties of 40 years in prison and a $1 million fine, plus 20 years in prison with a maximum fine of $500,000 or twice the amount laundered. “Chen, a Chinese national, is the second defendant charged in a five-defendant indictment.” And what tripped him up seems to be that “Certain members of the conspiracy also did in-person pickups of money directly from victims…”

And so YouTube enters the story — when the scammers called pranksters with 1,790,000 subscribers to their “Trilogy Media” channel. In an elaborate three-hour video, the team of pranksters lured the scammer to a rented Airbnb where they staged a fake funeral with a nun. (One of the men acting in the video remembers “we start doing a prayer… I’m holding the scammer’s hand in my nun outfit…”)

They convince the scammer to collect the cash from a dead man — “Is there anything you’d like to say to him?” Then there’s demon voices. The scammer’s victim resurrects from the dead. Did the cash mule bring holy water?

The end result was a video titled "CONFRONTING SCAMMERS WITH A FAKE FUNERAL (EPIC REACTIONS)". But two and a half years later, their “cash mule sting house” video has racked up over 1.3 million views, 22,000 likes, and 2,979 comments. (“This video is longer than Oppenheimer. Thanks for the laughs fellas.”)

And the scammer is facing 60 years in prison.

3+ hour video?

By Yo,dog! • Score: 4, Insightful
Oh, come on. The editor of this video interleaved clips from each ploy so you have to watch the whole 3+ hours to see any one ploy in its entirety? No, thanks.

Oh, boy!

By 93 Escort Wagon • Score: 4, Insightful

Another YouTube “epic reaction” video! I can’t wait to see how this three-plus-hours-long one differs from the 37 billion other epic reaction videos!

The fines are very small.

By jd • Score: 4, Interesting

The fines should be proportional to actual damage caused (i.e., 100% coverage of any interest on loans, any extra spending the person needed to do in consequence, loss of compound interest, damage to credit rating along with any additional spending this resulted in, and any medical costs that can reasonably be attributed to stress/anxiety). It would be difficult to get an exact figure per person, but a rough estimate of probable actual damage would be sufficient. Add that to the total direct loss - not the money that went through any individual involved - and THEN double that total. This becomes the minimum, not the maximum. You then allow the jury to factor in emotional costs on top of that.

In such cases as this, the statutory upper limit on fines should not apply. SCOTUS has repeatedly ruled that laws and the Constitution can have reasonable exceptions, and this would seem to qualify.

If a person has died in the meantime, where the death certificate indicates a cause of death that is medically associated with anxiety or depression, each person involved should also be charged with manslaughter per such case.

There’s real ‘funeral’ scams too

By NotEmmanuelGoldstein • Score: 3
It’s much more difficult to catch the real funeral scammers, because they copy a practice of legitimate businesses: sending invoices to the person handling your estate. That person doesn’t know which businesses you dealt with, what debts you left unpaid, and probably doesn’t have a lot of experience in tracking down paperwork. So, the fake invoices are paid.

Apple Brings Device-Level Age Verification to Two More Countries

Posted by EditorDavid
11 days ago Apple launched device-level age restrictions in the U.K. There were some glitches, reports the blog 9to5Mac.
For me, the experience was an entirely painless one, taking less than 30 seconds. All I had to do was tap a confirm and continue button, and Apple told me that the length of time I’d had an Apple account was used to confirm that I’m 18+. Others, however, experienced difficulties with the process timing out or failing to complete. We summarized some of the steps you can take to try to address this. Apple has since listed additional acceptable ways to verify your age. “You can confirm your age with a credit card, or by scanning a driver’s license or one of the following PASS-accredited Proof of Age cards: CitizenCard, My ID Card, TOTUM ID card, or Young Scot National Entitlement Card.”

If you don’t verify your age, then you’ll be treated as a child or teenager, meaning that both the web content filter and communication safety features are switched on.
Apple is continuing the roll-out in Singapore (population 6 million) and South Korea (population 52 million), the article points out, citing a new Apple support document.

South Korea’s law actually requires Apple to re-verify someone’s age annually.

What’s the secret?

By PuddleBoy • Score: 5, Funny

“South Korea’s law actually requires Apple to re-verify someone’s age annually.”

So they’re concerned that, as time passes, a devious person will… grow younger?

I need to get in on this right away!

Fight digital ID

By RegistrationIsDumb83 • Score: 3
An important thing to note is that all of the methods tie you to a real identity. Even account age: given the requisite period, Apple would have had enough time to do data collection on most people (which may be why some old accounts are not considered eligible). This is a hill I will die on - I will not give the OS my identity. I will sooner stop using cell phones entirely, or switch to a pager plus a Linux laptop. Not even my carrier has my identity anymore: after I caught T-Mobile selling my PII again after I opted out, I switched to an MVNO under a pseudonym. Can’t trust any of these corpos with your PII.

Unintended consequences

By houstonbofh • Score: 3
“If you don’t verify your age, then you’ll be treated as a child or teenager, meaning that both the web content filter and communication safety features are switched on.”

How can this go wrong? [roblox]

Chrome 148 Will Start ‘Lazy Loading’ Video and Audio to Improve Performance

Posted by EditorDavid
“Google has announced that it’s currently testing a new feature for Chrome 148 that could speed up day-to-day browsing,” reports PC World:
[T]he browser can intelligently postpone the loading of certain elements. Why load all images at the start when it can instead load images as you get close to them while scrolling? Chrome and Chromium-based browsers have had built-in lazy loading support for images and iframes since 2019, but this feature would make browsers capable of lazy loading video and audio elements, too. Note, however, that this won’t benefit YouTube video embeds — those are already lazy loadable since they’re embedded using iframes. Actual video and audio elements are rarer but not uncommon. In addition to Chrome, lazy loading of video and audio elements is also expected to be added to other Chromium-based browsers, including Microsoft Edge and Vivaldi.
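Until native support ships, pages can approximate the same behavior themselves. Below is a minimal TypeScript sketch, using a hypothetical data-src markup convention, of the IntersectionObserver technique commonly used to defer fetching a video until it nears the viewport; this is the pattern the browser feature would effectively make automatic:

// Hypothetical sketch: authors put the real URL in data-src and omit src,
// so the browser fetches nothing until we assign video.src below.
const lazyVideoObserver = new IntersectionObserver(
  (entries, observer) => {
    for (const entry of entries) {
      if (!entry.isIntersecting) continue;
      const video = entry.target as HTMLVideoElement;
      const src = video.dataset.src;
      if (src) {
        video.src = src;   // setting src triggers the actual network fetch
        video.load();
      }
      observer.unobserve(video); // one-shot: no need to keep watching
    }
  },
  { rootMargin: "200px" }  // begin loading shortly before the element is visible
);

document
  .querySelectorAll<HTMLVideoElement>("video[data-src]")
  .forEach((video) => lazyVideoObserver.observe(video));

Native lazy loading would remove the need for this boilerplate and let the browser apply its own heuristics for when to fetch.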

No auto load/play, period

By markdavis • Score: 5, Insightful

No video (or animated image) should ever load/autoplay unless the user interacts with that element, indicating he/she wants to play it. Same with audio.

That is how I have Firefox set up. I can’t imagine why anyone would want something different, unless the user wants to whitelist the site (like I do with my video cameras, since I do want those to play automatically).

Re:To speed up browsing

By ddtmm • Score: 4, Informative
Like most here, I’ve been using ad blockers for many years and I’m still amazed at how much faster sites load when you block the additional crap. Depending on the site, sometimes up to 5-10 times faster - mind you, the important stuff does load first. Lazy loading (what a lame term) will be insignificant compared to ad blocking tech.

Absolute Shit

By Puls4r • Score: 3
So, Ctrl+F search is broken because the content isn’t loaded yet. I can’t scroll down quickly because it does the constant stop-and-buffer routine.

This is just total ass because people have over-bloated the web. I don’t need 20-50 MB pictures on a little screen. I don’t need all the bloated java bullshit that companies, especially news media companies, are filling their pages with.

This is another symptom of shitty programmers using 100 different pre-made libraries all of which are shitty and bloated to begin with, along with oversize graphics and hundreds of links to third party ad servers all using bandwidth that’s utterly unrelated to the actual content I want to read.

Needs to be optional

By kschendel • Score: 3

As long as I can turn it off, I don’t give a rat’s ass what stupid, annoying, and bandwidth-eating “features” they put into Chrome.

Scientists Engineered a Plant To Produce 5 Different Psychedelics At Once

Posted by EditorDavid
Plants, toads, and mushrooms “can all produce psychedelic substances,” writes ScienceAlert.

“And now their powers have been combined in one plant.”
[S]cientists have taken the genes these organisms use to make five natural psychedelics and introduced them into a tobacco plant (Nicotiana benthamiana), which then produced all five compounds simultaneously. As interest grows in psychedelics as potential treatments for illnesses such as depression, anxiety, and PTSD, the newly developed system could offer scientists a new way to produce these compounds for research purposes…

[P]rogress in this field remains limited, in part due to regulatory restrictions, underscoring the need for more research. This creates practical challenges for scientists. “Traditionally, the supply of psychedelics relies on natural producers, mainly plants, fungi, and the Sonoran Desert toad,” the researchers write. “Harvesting these organisms for their psychoactive compounds raises ecological and ethical concerns, being increasingly threatened by habitat loss and overexploitation…”

[T]he team carefully monitored the plant’s production of five psychedelic tryptamines: DMT originally from plants; psilocin and psilocybin from mushrooms; and bufotenin and 5-MeO-DMT from toads. The modified tobacco plants were found to produce all five compounds simultaneously.
The article points out that the researchers “also took it a step further.” By tweaking the enzymes they were able to “produce modified versions of the compounds that do not naturally occur in plants, and which may also have therapeutic value.”

Where can I get some?

By Snotnose • Score: 5, Funny
Asking for a friend

Re:Unfortunately this doesnt look like an April fo

By cusco • Score: 5, Interesting

I have no idea if people experiment with mushrooms and ayahuasca simultaneously.

As a rule, no, in part because they grow in two entirely different environments, plus ayahuasca doesn’t keep well. I can’t really imagine the cross-effects, but it would be weird. Psilocybin tends to be best when done alone, especially when surrounded by nature. Ayahuasca on the other hand is almost always done in groups, where it can generate hallucinations experienced by the entire group at once (which is weird to even contemplate).

Re:Sounds like a great business opportunity.

By garyisabusyguy • Score: 5, Insightful

Tomacco

Re:Unfortunately this doesnt look like an April fo

By garyisabusyguy • Score: 5, Interesting

I’d like to see them produce Tabernanthalog, a synthetic, non-hallucinogenic analogue of ibogaine developed to treat substance use disorders and mental health conditions. It promotes structural neural plasticity (psychoplastogen) without causing the severe cardiac risks or hallucinogenic effects associated with ibogaine.

It will take forever to get through FDA human trials, but make it part of a plant and it’s a “Nutraceutical”

Re:Why all at once?

By ceoyoyo • Score: 5, Insightful

We don’t make drugs by giving patients some leaves to munch on. The point of the research was to develop a platform for producing any of a wide variety of common psychoactive drugs in a crop plant. They demonstrated its flexibility by producing compounds from three different kingdoms of life. If you were going to do it for real production you could engineer exactly what you wanted into their system. You might well go for more than one compound because you’ve got to purify them anyway so separating two or more is no big deal, and you get multiple pharmaceuticals with each harvest.

Does Ubuntu Now Require More RAM Than Windows 11?

Posted by EditorDavid
“Canonical is no longer pretending that 4GB is enough,” writes the blog How-to-Geek, noting Ubuntu 26.04 LTS “raises the baseline memory to 6GB, alongside a 2GHz dual-core processor, and 25GB of storage…”
Ubuntu 14.04 LTS (Trusty Tahr) set the floor at 1GB — a modest ask when it launched more than a decade ago in 2014. Then came Ubuntu 18.04 LTS (Bionic Beaver), which pushed the number to 4GB, surviving quite well in the era of 16GB being considered standard for mid-range laptops.... Ubuntu’s new minimum requirement lands in an interesting spot when compared against Windows 11. Microsoft’s operating system requires just 4GB RAM, although real-world usage often tells a different story. Usually, 8GB is considered the sweet spot to handle modern apps and multitasking.
The blog OMG Ubuntu argues this change is “not because Ubuntu requires 2GB more memory than it did, but more the way we compute does.”
It’s more of an honesty bump. Components that make up the distro — the GNOME desktop and extensions, modern web browsers (and the sites we load in them), and the kinds of apps we use (and keep running) whilst multitasking — are more demanding… The Resolute Raccoon’s memory requirements better reflect real-world multitasking.

Ubuntu 26.04 LTS can be installed on devices with less than 6GB RAM (though not with less than 25GB of disk space). The experience may not be as smooth or as responsive as developers intend (so you don’t get to complain), but it will work. I installed Ubuntu 26.04 Beta on a laptop with just 2 GB of memory — slow to the point of frustration in use, but otherwise functional.

If you have a device with 4 GB RAM and you can’t upgrade (soldered memory is a thing, and e-waste can be avoided), then alternatives exist. Many Ubuntu flavours, like Lubuntu, have lower system requirements than the main edition. Plus, there’s always the manual option: using the Ubuntu netboot installer to install a base system and then building out a more minimal system from there.

4GB has been insufficient for many years now

By Artem S. Tashkinov • Score: 5, Interesting

The truth is that these requirements should have been updated years ago, as 4 GB has been insufficient for at least a decade unless you never use web browsers or modern applications using CEF (Chromium Embedded Framework).

The fact that Windows 11 still “requires” 4 GB of RAM is ridiculous. I recently installed Windows 11 from scratch with no OEM junk, and only installed the Intel GPU driver. On boot, the system RAM usage was around 5.9 GB with no applications running except obviously Windows Task Manager and Windows Explorer. This is all thanks to the PhiSilica Windows AI components that are now pre-installed automatically, as well as the WorkloadsSessionHost.exe application that runs at all times.

Took me quite a while to delete all this junk and reduce memory usage to just below 4 GB, which still sounds crazy. 6 gigs of RAM wasted just to show your desktop (as most Windows users will never get to the bottom of it); that’s what we are dealing with in 2026.

I run Debian and i3 / Sway

By Rosco P. Coltrane • Score: 5, Interesting

on all my machines. Once you get past the tiled window manager paradigm - if you’ve never used one before - you realize how fast and seamless it is, and it truly is the least common denominator in terms of memory usage.

I left Mint (which is really an Ubuntu derivative) years ago, and now i3 / Sway let me have the same unified desktop on all my machines, fast or slow, new or old, and they all feel perfectly usable.

I highly recommend spending the time to create an i3 or Sway config file. It’s well worth the effort, and it’s a one-off.

And if you just want to try i3 or Sway on your existing distro, install it and simply change the window manager for your user in the display manager: it lives totally independently of whatever you currently use, so it’s risk-free.

Re:4GB has been insufficient for many years now

By pz • Score: 5, Informative

Web browsers are absolute hogs, and, in part, that’s because web sites are absolute hogs. Web sites are now full-blown applications that were written without regard to memory footprint or efficiency. I blame the developers who write their code on lovely, large, powerful machines (because devs should get good tools, I get that), but then don’t suffer the pain of running them on perfectly good 8 GB laptops that *were* top-of-the-line 10 years ago, but are now on eBay for $100. MS Teams is a perfect example of this. What a steaming pile of crap. My favored laptop is said machine, favored because of the combination of ultra-light weight and eminently portable size, and Zoom works just fine on it, but Teams is unusable. Slack is OK, if that’s nearly the only web site you’re visiting. Eight frelling GB to run a glorified chat room.

The thing that gets my goat, however, is that the laptop I used in the late 1990s was about the same form factor as this one, had 64 MB (yes, MB) of main memory, and booted up Linux back then just about as fast. If memory serves, the base system took about 2 MB of RAM once it was up. The CPU clock on that machine was in the 100 MHz range. Even without accounting for the massive architectural improvements since then, my 2010s-era laptop should boot an order of magnitude faster. It does not.

Why? Because a long time ago, it became OK to include vast numbers of libraries because programmers were too lazy to implement something on their own, so you got 4, 5, 6 or more layers of abstraction, as each library recursively calls packages only slightly lower-level to achieve its goals. I fear that with AI coding, it will only get worse.

And don’t get me started on the massive performance regression that so-called modern languages represent, even when compiled. Hell in a handbasket? Yes. Because CPU cycles are stupidly cheap now, and we don’t have to work hard to eke out every bit of performance, so we don’t bother.

Re: 4GB has been insufficient for many years now

By scrib • Score: 4, Interesting Thread

What if AI coding goes the other way?

AI is good at writing tons of code. We might actually move away from layers of libraries if AI directly includes all the support functions we’ve been too lazy to rewrite.
AI, using its training on all those libraries, might end up in-lining only the parts of the libraries that are needed.

Re: 4GB has been insufficient for many years now

By ArchieBunker • Score: 5, Funny Thread

Interesting proposition. Have it write a browser in assembly.

Apple’s First 50 Years Celebrated - Including How Steve Jobs Finally Accepted an ‘Open’ App Store

Posted by EditorDavid View on SlashDot Skip
Apple’s 50th anniversary got celebrated in weird and wild ways. CEO Tim Cook posted a special 30-second video rewinding through the years of Apple’s products until it reaches the Apple I. Podcaster Lex Fridman noticed that if you play the sound in reverse, “It’s the Think Different ad music, pitched up.” TechRadar played seven 50-year-old Apple I games on an emulator, including Star Trek, Blackjack, Lunar Lander, and of course, Conway’s Game of Life.

And Macworld ranked Apple’s 50 most influential people. (Their top five?)

5. Tony Fadell (iPhone co-creator/“father of the iPod”)
4. Sir Jony Ive
3. Steve Wozniak
2. Tim Cook
1. Steve Jobs

One of the most thoughtful celebrants was David Pogue, who’s spent 42 years writing about Apple (starting as a Macworld columnist and the author of Mac for Dummies, one of the first "…For Dummies" books ever published, back in the early 1990s). Now 63 years old, Pogue spent the last two years working on a 608-page hardcover book titled Apple: The First 50 Years. But on his Substack, Pogue contemplated his own history with the company — including several interactions with Steve Jobs. Pogue remembers how Jobs “hated open systems. He wanted to make self-contained, beautiful machines. He didn’t want them polluted by modifications.”

The tech blog Daring Fireball notes that Pogue actually interviewed Scott Forstall (who’d led the iPhone’s software development team) for his new book, “and got this story, about just how far Steve Jobs thought Apple could go to expand the iPhone’s software library while not opening it to third-party developers.”
“I want you to make a list of every app any customer would ever want to use,” he told Forstall. “And then the two of us will prioritize that list. And then I’m going to write you a blank check, and you are going to build the largest development team in the history of the world, to build as many apps as you can as quickly as possible.” Forstall, dubious, began composing a list. But on the side, he instructed his engineers to build the security foundations of an app store into the iPhone’s software — “against Steve’s knowledge and wishes,” Forstall says. […]

Two weeks after the iPhone’s release, someone figured out how to “jailbreak” the iPhone: to hack it so that they could install custom apps. Jobs burst into Forstall’s office. “You have to shut this down!” But Forstall didn’t see the harm of developers spending their efforts making the iPhone better. “If they add something malicious, we’ll ship an update tomorrow to protect against that. But if all they’re doing is adding apps that are useful, there’s no reason to break that.” Jobs, troubled, reluctantly agreed.

Week by week, more cool apps arrived, available only to jailbroken phones. One day in October, Jobs read an article about some of the coolest ones. “You know what?” he said. “We should build an app store.”

Forstall, delighted, revealed his secret plan. He had followed in the footsteps of Burrell Smith (the Mac’s memory-expansion circuit) and Bob Belleville (the Sony floppy-drive deal): He’d disobeyed Jobs and wound up saving the project.
In fact, the book “includes new interviews with 150 key people who made the journey, including Steve Wozniak, John Sculley, Jony Ive, and many current designers, engineers, and executives” (according to its description on Amazon). Pogue’s book even revisits the story of Steve Jobs proving an iPod prototype could be smaller by tossing it into an aquarium, shouting “If there’s air bubbles in there, there’s still room. Make it smaller!” But Pogue’s book “added that there’s a caveat to this compelling bit of Apple lore,” reports NPR.

“It never actually happened. It’s just one more Apple myth.”

Re:Go for Linux

By saloomy • Score: 5, Informative Thread
It is somewhat correct. For one, like Linux, Darwin is open source. Many of the commands in macOS are also Linux commands (grep, cat, etc.). It is a POSIX-like OS (both take inspiration from UNIX). Also, both use the same file-layout conventions (same slashes, same dot notation for hidden files, and so on). It is certainly more like Linux than, say, Windows.

Re:Bad video

By hcs_$reboot • Score: 5, Insightful Thread

The man had no talent outside of marketing and business

Steve Jobs wasn’t the engineer, that’s true. But combining vision, design sense, and business execution is its own form of talent. Without that, even the best engineering often goes nowhere.

Re:Tim Cook #2??

By tlhIngan • Score: 4, Insightful Thread

Not a lot against the guy, but he should be #5 .. he merely continued the trajectory set by the others.

For nearly 15 years.

Jobs passed in 2011, and Tim Cook has been at the helm for the 15 years since. Even if he was coasting, Apple has done remarkably well in those 15 years on coasting alone. Most companies falter and die out by then. Heck, after Apple ousted Jobs, the company was struggling badly by the time he came back, and he was gone for less time than Cook has now been in charge.

Even if Tim Cook did absolutely nothing for the 15 years he was CEO, the fact that Apple is still around and still going strong is already a huge credit to his (non-)leadership in managing to keep the ship steady.

Tell me how many other CEOs are like that - because history is littered with failed companies whose leadership wanted to make their mark and then their companies imploded. Like Apple nearly did 30 years ago.

Tim Cook, by “doing nothing”, managed to keep Apple on the up and up, and history reveals this isn’t usually what happens.

Re:Go for Linux

By dnaumov • Score: 4, Informative Thread

It is somewhat correct. For one like Linux, Darwin is open-source.

It WAS. I challenge you to provide a link to the Darwin sources for macOS 26.

Jobs’ bio

By kencurry • Score: 4, Insightful Thread
Given up for adoption, a bright kid, charismatic but rude at times, questioning everything, hippie mindset, picking apples at a commune, living in India, fruitarian diets. Really only took a few college courses and wound up obsessed with calligraphy. Would you look at this bio and think he would be responsible for so much of what is now modern culture? I wonder what he would think of social media, ubiquitous phones, and AI everywhere today?

Top NPM Maintainers Targeted with AI Deepfakes in Massive Supply-Chain Attack, Axios Briefly Compromised

Posted by EditorDavid View on SlashDot Skip
“Hackers briefly turned a widely trusted developer tool into a vehicle for credential-stealing malware that could give attackers ongoing access to infected systems,” the news site Axios.com reported Tuesday, citing security researchers at Google.

The compromised package — also named axios — simplifies HTTP requests, and reportedly receives millions of downloads each day:
The malicious versions were removed within roughly three hours of being published, but Google warned the incident could have “far-reaching impacts” given the package’s widespread use, according to John Hultquist, chief analyst at Google Threat Intelligence Group. Wiz estimates Axios is downloaded roughly 100 million times per week and is present in about 80% of cloud and code environments. So far, Wiz has observed the malicious versions in roughly 3% of the environments it has scanned.
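For a sense of why the package is everywhere: axios turns raw HTTP plumbing into a one-liner with sane defaults. A minimal TypeScript sketch (the URL is hypothetical):

```typescript
import axios from "axios";

// GET a JSON resource; axios parses the response body
// and rejects the promise on non-2xx status codes.
const { data } = await axios.get("https://api.example.com/users/42", {
  timeout: 5000, // fail fast instead of hanging forever
});
console.log(data);
```

That convenience, multiplied across an estimated 100 million downloads a week, is precisely what made the maintainer’s credentials such a high-value target.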
Friday PCMag notes the maintainer’s compromised account had two-factor authentication enabled, with the breach ultimately traced “to an elaborate AI deepfake from suspected North Korean hackers that was convincing enough to trick a developer into installing malware,” according to a post-mortem published Thursday by lead developer Jason Saayman:
[Saayman] fell for a scheme from a North Korean hacking group, dubbed UNC1069, which involves sending out phishing messages and then hosting virtual meetings that use AI deepfakes to clone the face and voices of real executives. The virtual meetings will then create the impression of an audio problem, which can only be “solved” if the victim installs some software or runs a troubleshooting command. In reality, it’s an effort to execute malware. The North Koreans have been using the tactic repeatedly, whether it be to phish cryptocurrency firms or to secure jobs from IT companies.

Saayman said he faced a similar playbook. “They reached out masquerading as the founder of a company, they had cloned the company’s founders likeness as well as the company itself,” he wrote. “They then invited me to a real Slack workspace. This workspace was branded… The Slack was thought out very well, they had channels where they were sharing LinkedIn posts. The LinkedIn posts I presume just went to the real company’s account, but it was super convincing etc.” The hackers then invited him to a virtual meeting on Microsoft Teams. “The meeting had what seemed to be a group of people that were involved. The meeting said something on my system was out of date. I installed the missing item as I presumed it was something to do with Teams, and this was the remote access Trojan,” he added. “Everything was extremely well coordinated, looked legit and was done in a professional manner.”
Friday developer security platform Socket wrote that several more maintainers in the Node.js ecosystem “have come out of the woodwork to report that they were targeted by the same social engineering campaign.”
The accounts now span some of the most widely depended-upon packages in the npm registry and Node.js core itself, and together they confirm that axios was not a one-off target. It was part of a coordinated, scalable attack pattern aimed at high-trust, high-impact open source maintainers. Attackers also targeted several Socket engineers, including CEO Feross Aboukhadijeh. Feross is the creator of WebTorrent, StandardJS, buffer, and dozens of widely used npm packages with billions of downloads… Commenting on the axios post-mortem thread, he noted that this type of targeting [against individual maintainers] is no longer unusual… “We’re seeing them across the ecosystem and they’re only accelerating.”

Jordan Harband, John-David Dalton, and other Socket engineers also confirmed they were targeted. Harband, a TC39 member, maintains hundreds of ECMAScript polyfills and shims that are foundational to the JavaScript ecosystem. Dalton is the creator of Lodash, which sees more than 137 million weekly downloads on npm. Between them, the packages they maintain are downloaded billions of times each month. Wes Todd, an Express TC member and member of the Node Package Maintenance Working Group, also confirmed he was targeted. Matteo Collina, co-founder and CTO of Platformatic, Node.js Technical Steering Committee Chair, and lead maintainer of Fastify, Pino, and Undici, disclosed on April 2 that he was also targeted. His packages also see billions of downloads per year… Scott Motte, creator of dotenv, the package used by virtually every Node.js project that handles environment variables, with more than 114 million weekly downloads, also confirmed he was targeted using the same Openfort persona.
Socket reports that another maintainer was targeted with an invitation to appear on a podcast. (During the recording, a suspicious technical issue appeared which required a software fix to resolve...)

Even judged purely on its technical implementation, “This is among the most operationally sophisticated supply chain attacks ever documented against a top-10 npm package,” the CI/CD security company StepSecurity wrote Tuesday:
The dropper contacts a live command-and-control server, delivers separate second-stage payloads for macOS, Windows, and Linux, then erases itself and replaces its own package.json with a clean decoy… Three payloads were pre-built for three operating systems. Both release branches were poisoned within 39 minutes of each other. Every artifact was designed to self-destruct. Within two seconds of npm install, the malware was already calling home to the attacker’s server before npm had even finished resolving dependencies… Both versions were published using the compromised npm credentials of a lead axios maintainer, bypassing the project’s normal GitHub Actions CI/CD pipeline.
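The “calling home before npm had even finished resolving dependencies” detail works because npm executes a package’s lifecycle scripts (preinstall and postinstall) as arbitrary shell commands during installation. A minimal sketch of the mechanism (the script name is hypothetical; any package, benign or malicious, can do this):

```json
{
  "name": "some-package",
  "version": "1.0.0",
  "scripts": {
    "preinstall": "node setup.js"
  }
}
```

Running npm install --ignore-scripts (or setting npm config set ignore-scripts true globally) disables these hooks entirely, which is one of the blunter but more effective defenses against this class of dropper.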
“As preventive steps, Saayman has now outlined several changes,” reports The Hacker News, “including resetting all devices and credentials, setting up immutable releases, adopting OIDC flow for publishing, and updating GitHub Actions to adopt best practices.”
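Of those steps, the OIDC publishing flow most directly neutralizes this attack class, because no long-lived npm token remains on the maintainer’s machine to steal. A sketch of what that can look like in GitHub Actions, assuming the package has trusted publishing configured on npmjs.com (the workflow name, tag pattern, and versions are illustrative):

```yaml
# .github/workflows/release.yml
name: release
on:
  push:
    tags: ["v*"]
permissions:
  contents: read
  id-token: write   # lets the job mint a short-lived OIDC token
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          registry-url: "https://registry.npmjs.org"
      - run: npm ci
      # No NODE_AUTH_TOKEN needed: the registry trusts this workflow's OIDC identity
      - run: npm publish --provenance --access public
```

Because the credential is minted per-run and scoped to one repository and workflow, a compromised laptop no longer holds anything useful for publishing releases.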

The Wall Street Journal called it “the latest in a string of incidents exposing risks in the systems that underpin how modern software is built.”

This is the part I don’t get…

By jvkjvk • Score: 5, Insightful Thread

>The meeting said something on my system was out of date. I installed the missing item as I presumed it was something to do with Teams, and this was the remote access Trojan,

Why on earth aren’t you downloading this from an official MS Teams page, if something is out of date? It certainly wasn’t a popup from Teams itself that showed you this.

If I get an official looking message in email, I don’t go about clicking on the links in it - I go directly to the website, log in, and see what’s up.

Re:npm is a problem

By martin-boundary • Score: 5, Informative Thread
Those issues are not new. What *is* new is that AI mimicry has lowered the bar for attacks substantially.

Re:npm is a problem

By darkain • Score: 4, Insightful Thread

While I agree in theory, this particular case is different.

Do you validate every single package inside of yum/dnf/apt/pkg or similar OS package repositories?

Because what happened in this case is that the maintainer of a major package had their system compromised.

This could have easily been an attack against any package in any OS repo, open or closed source, using this method.

The real problem

By gweihir • Score: 4, Insightful Thread

Is the high effort the attackers invested. Seems things are heating up.

Money Can Fix the Problem it Created

By Carcass666 • Score: 5, Interesting Thread

The fundamental problem is that bad actors are willing to spend considerable money and resources to implement these attacks, while the consumers of this software are unwilling to spend the considerable money and resources needed to mitigate the risk. Maybe there’s a business model for a firm/organization to say “Okay, we’re going to own this,” meaning creating an ecosystem (a curated walled garden) along the following lines?

It is likely that the indemnification/insurance part will be the most expensive piece of this (profits and shareholder returns notwithstanding). But without at least an option for it, I don’t see how you get companies to take this seriously enough to pay for it.

Most of the package scanning tools that I know of only work once you have already retrieved packages that may have been compromised. Paying to secure the supply chain upstream is a better solution, if somebody could make money doing it.

Microsoft Pulls Then Re-Issues Windows 11 Preview Update. Also Begins Force-Updating Windows 11

Posted by EditorDavid View on SlashDot
Nine days ago Microsoft released a non-security “preview” update for Windows 11 — one issued not as mandatory for the average Windows user, notes ZDNet, “but rather as optional, more for IT admins and power users who want to test them.”

TechRepublic adds that the update “was to bring ‘production-ready improvements’ and generally ensure system stability by optimizing different Windows services.” So it’s ironic that some (but not all) users reported instead that the update “blocks users at the door, refusing to install or crashing midway through the process.”

“It apparently impacted enough people to force Microsoft to take action,” writes ZDNet. “Microsoft paused and then pulled the update,” and then Tuesday released a new update “designed to replace the glitchy one. This one includes all the new features and improvements from the previous preview update, but also fixes the installation issues that clobbered that update.”

Meanwhile, as Windows 11 version 24H2 approaches its end of life this October, Microsoft is now force-updating users to the latest version, reports BleepingComputer:
“The machine learning-based intelligent rollout has expanded to all devices running Home and Pro editions of Windows 11, version 24H2 that are not managed by IT departments,” Microsoft said in a Monday update to the Windows release health dashboard… “No action is required, and you can choose when to restart your device or postpone the update.”
Neowin reports:
The good news is that the update from version 24H2 to 25H2 is a minor enablement package, as the two operating systems share the same codebase. As such, the update won’t take long, and you should not encounter any disruptions, compatibility issues, or previously unseen bugs… Microsoft recently promised to implement big changes in how Windows Update works, including the ability to postpone updates for as long as you want. However, Microsoft has yet to clarify if that includes staying on a release beyond its support period.

Thanks to long-time Slashdot reader Ol Olsoc for sharing the news.

Shouldn’t need to be said.

By fahrbot-bot • Score: 5, Interesting Thread

… update “was to bring ‘production-ready improvements’ …

As opposed to half-assed improvements? Obviously updates/patches pushed to end-users should be “production ready”. It’s sad that it had to be specifically stated that Microsoft actually worked on these. I imagine people will remain dubious anyway.

… and generally ensure system stability by optimizing different Windows services.”

So much better than those updates designed to do the opposite. /s

So it’s ironic that some (but not all) users reported instead that the update “blocks users at the door, refusing to install or crashing midway through the process.”

Ironic? Yes. Surprising? No.

Re:“Force-updating”

By markdavis • Score: 4, Informative Thread

>“These days, it’s literally not even *safe* to fail to upgrade to the latest version of whatever software.[…]The days of upgrading when you want to, are a relic of the 1990s.”

Seems to work fine for Linux. I update only when I choose to on all my machines. Granted, I don’t let most of them get too far behind. But there are those that are intentionally left alone, and need to be, for various complex reasons.

Re:Wow

By RitchCraft • Score: 4 Thread

Yes, I agree, but the last 6 years in particular have seen the shit added to the show exponentially.

Begins force-updating…so no choice

By bsdetector101 • Score: 5, Insightful Thread
When you don’t own / control what your computer does…

Re:“Force-updating”

By drinkypoo • Score: 4, Insightful Thread

But it is also generally more secure, outside of its obscurity

This is a fantasy not substantiated by evidence. Heartbleed—a vulnerability in the open source OpenSSL library that ships with virtually every Linux distribution—was lying in plain sight for years before some hacker discovered it, and it was exploited in the wild for years before anybody discovered the attack.

Now tell us how many similar bugs are in Windows, and will be found even without the obscurity of closed source. You don’t know, because you depend on Microsoft to tell you when they fuck up, but you’re declaring this a victory for Microsoft anyway? Do fucking tell.