# To Fight Climate Change, Norway Wants to Become Europe's Carbon Dump
robot (spnet, 1) → All – 05:22:02 2025-07-28
Liquefied CO2 will be transported by ship to "the world's first carbon shipping port," reports the Washington Post — an island in the North Sea where it will be "buried in a layer of spongy rock a mile and a half beneath the seabed."
Norway's government is covering 80% of the $1 billion first phase, while three fossil fuel companies are contributing another $714 million toward an ongoing expansion (plus an additional $150 million E.U. subsidy). As Europe's top oil and gas producer, Norway is using its fossil fuel income to see if it can make "carbon dumping" work.
The world's first carbon shipment arrived this summer, carrying 7,500 metric tons of liquefied CO2 from a Norwegian cement factory that otherwise would have gone into the atmosphere... If all goes as planned, the project's backers — Shell, Equinor and TotalEnergies, along with Norway — say their facility could pump 5 million metric tons of carbon dioxide underground each year, or about a tenth of Norway's annual emissions...
[At the Heidelberg Materials cement factory in Brevik, Norway], when hot CO2-laden air comes rushing out of the cement kilns, the plant uses seawater from the neighboring fjord to cool it down. The cool air goes into a chamber where it gets sprayed with amine, a chemical that latches onto CO2 at low temperatures. The amine mist settles to the bottom, dragging carbon dioxide down with it. The rest of the air floats out of the smokestack with about 85 percent less CO2 in it, according to project manager Anders Pettersen. Later, Heidelberg Materials uses waste heat from the kilns to break the chemical bonds, so that the amine releases the carbon dioxide. The pure CO2 then goes into a compressor that resembles a giant steel heart, where it gets denser and colder until it finally becomes liquid. That liquid CO2 remains in storage tanks until a ship comes to carry it away. At best, operators expect this system to capture half the plant's CO2 emissions: 400,000 metric tons per year, or the equivalent of about 93,000 cars on the road...
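For scale, the cars equivalence in that last figure can be sanity-checked against typical per-vehicle emissions (a back-of-envelope sketch; the ~4.6 t/car/year figure is an assumption, roughly the EPA's estimate for a passenger car):

```python
# Sanity check of the "400,000 t/year = ~93,000 cars" equivalence.
# Assumption: ~4.6 metric tons of CO2 per passenger car per year (EPA's rough figure).
captured_t_per_year = 400_000   # tons CO2 the Brevik plant hopes to capture annually
co2_per_car_t = 4.6             # assumed annual emissions of one passenger car

print(f"~{captured_t_per_year / co2_per_car_t:,.0f} cars")
# ~87,000 -- the same order as the article's ~93,000 (which implies ~4.3 t/car)
```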
[T]hree other companies are lined up to follow: Ørsted, which will send CO2 from two bioenergy plants in Denmark; Yara, which will send carbon from a Dutch fertilizer factory; and Stockholm Exergi, which will capture carbon from a Swedish bioenergy plant that burns wood waste. All of these projects have gotten significant subsidies from national governments and the European Union — essentially de-risking the experiment for the companies. Experts say the costs and headaches of installing and running carbon-capture equipment may start to make more financial sense as European carbon rules get stricter and the cost of emitting a ton of carbon dioxide goes up. Still, they say, it's hard to imagine many companies deciding to invest in carbon capture without serious subsidies...
The first shipments are being transported by Northern Pioneer, the world's biggest carbon dioxide tanker ship, built specifically for this project. The 430-foot ship can hold 7,500 metric tons of CO2 in tanks below deck. Those tanks keep it in a liquid state by cooling it to minus-15 degrees Fahrenheit and squeezing it with the same pressure the outside of a submarine would feel 500 feet below the waves. While that may sound extreme, consider that the liquid natural gas the ship uses for fuel has to be stored at minus-260 degrees. "CO2 isn't difficult to make it into a liquid," said Sally Benson, professor of energy science and engineering at Stanford University. Northern Pioneer is designed to emit about a third less carbon dioxide than a regular ship — key for a project that aims to eliminate carbon emissions. The ship burns natural gas, which emits less CO2 than marine diesel produces (though gas extraction is associated with methane leaks). The vessel uses a rotor sail to capture wind power. And it blows a constant stream of air bubbles to reduce friction as the hull cuts through the water, allowing it to burn less fuel. For every 100 tons of CO2 that Northern Lights pumps underground, it expects to emit three tons of CO2 into the atmosphere, mainly by burning fuel for shipping.
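Both shipping figures are easy to gut-check (a sketch under stated assumptions: standard seawater density for the 500-foot pressure comparison, and the 3-in-100 overhead quoted above):

```python
# Gut-check of the tanker figures (assumptions noted inline, not project specs).
RHO_SEAWATER = 1025.0    # kg/m^3, typical seawater density (assumed)
G = 9.81                 # m/s^2
DEPTH_M = 500 * 0.3048   # 500 feet in meters

# Gauge pressure at 500 ft of seawater, in bar (1 bar = 1e5 Pa).
pressure_bar = RHO_SEAWATER * G * DEPTH_M / 1e5
print(f"tank pressure: ~{pressure_bar:.0f} bar")  # ~15 bar

# Net sequestration if 3 t are emitted for every 100 t pumped underground.
stored, emitted = 100.0, 3.0
print(f"net CO2 kept out of the atmosphere: {100 * (stored - emitted) / stored:.0f}%")  # 97%
```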
Eventually the carbon flows into a pipeline "that plunges through the North Sea and into the rocky layers below it — an engineering feat that's a bit like drilling for oil in reverse..." according to the article.
"Over the centuries, it should chemically react with the rock, eventually being locked away in minerals."
[ Read more of this story ]( https://hardware.slashdot.org/story/25/07/26/0358240/to-fight-climate-change-norway-wants-to-become-europes-carbon-dump?utm_source=atom1.0moreanon&utm_medium=feed ) at Slashdot.
# Creator of 1995 Phishing Tool 'AOHell' On Piracy, Script Kiddies, and What He Thinks of AI
robot (spnet, 1) → All – 05:22:02 2025-07-28
In 1995's online world, AOL existed mostly beside the internet as a "walled, manicured garden," remembers Fast Company.
Then along came AOHell, "the first of what would become thousands of programs designed by young hackers to turn the system upside down" — built by a high school dropout calling himself "Da Chronic," who says he wrote it on "a computer that I couldn't even afford" with "a pirated copy of Microsoft Visual Basic."
[D]istributed throughout the teen chatrooms, the program combined a pile of tricks and pranks into a slick little control panel that sat above AOL's windows and gave even newbies an arsenal of teenage superpowers. There was a punter to kick people out of chatrooms, scrollers to flood chats with ASCII art, a chat impersonator, an email and instant message bomber, a mass mailer for sharing warez (and later mp3s), and even an "Artificial Intelligence Bot" [which performed automated if-then responses]. Crucially, AOHell could also help users gain "free" access to AOL. The program came with a program for generating fake credit card numbers (which could fool AOL's sign up process), and, by January 1995, a feature for stealing other users' passwords or credit cards. With messages masquerading as alerts from AOL customer service reps, the tool could convince unsuspecting users to hand over their secrets...
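The fake card numbers presumably worked because signup flows of that era validated cards offline with the Luhn checksum rather than contacting a bank. A minimal sketch of that check (an illustration of the technique, not AOHell's actual code):

```python
# Luhn checksum validation -- the offline check that era's card-number
# generators were built to satisfy. Illustrative sketch, not AOHell's code.
def luhn_valid(number: str) -> bool:
    digits = [int(d) for d in number][::-1]
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:   # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9   # same as summing the two digits of the product
        total += d
    return total % 10 == 0

print(luhn_valid("4539148803436467"))  # True: checksum-valid test number
print(luhn_valid("4539148803436468"))  # False: last digit off by one
```

A generator needs only to emit random digits and pick the final one so the sum lands on a multiple of 10, which is why purely offline validation was so easy to fool.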
Of course, Da Chronic — actually a 17-year-old high school dropout from North Carolina named Koceilah Rekouche — had other reasons, too. Rekouche wanted to hack AOL because he loved being online with his friends, who were a refuge from a difficult life at home, and he couldn't afford the hourly fee. Plus, it was a thrill to cause havoc and break AOL's weak systems and use them exactly how they weren't meant to be, and he didn't want to keep that to himself. Other hackers "hated the fact that I was distributing this thing, putting it into the team chat room, and bringing in all these noobs and lamers and destroying the community," Rekouche told me recently by phone...
Rekouche also couldn't have imagined what else his program would mean: a free, freewheeling creative outlet for thousands of lonely, disaffected kids like him, and an inspiration for a generation of programmers and technologists. By the time he left AOL in late 1995, his program had spawned a whole cottage industry of teenage script kiddies and hackers, and fueled a subculture where legions of young programmers and artists got their start breaking and making things, using pirated software that otherwise would have been out of reach... In 2014, [AOL CEO Steve] Case himself acknowledged on Reddit that "the hacking of AOL was a real challenge for us," but that "some of the hackers have gone on to do more productive things."
When he first met Mark Zuckerberg, he said, the Facebook founder confessed to Case that "he learned how to program by hacking [AOL]."
"I can't imagine somebody doing that on Facebook today," Da Chronic says in a new interview with Fast Company. "They'll kick you off if you create a Google extension that helps you in the slightest bit on Facebook, or an extension that keeps your privacy or does a little cool thing here and there. That's totally not allowed."
AOHell's creators had called their password-stealing techniques "phishing" — and the name stuck. (AOL was working with federal law enforcement to find him, according to a leaked internal email, but "I didn't even see that until years later.") Enrolled in college, he decided to write a technical academic paper about his program. "I do believe it caught the attention of Homeland Security, but I think they realized pretty quickly that I was not a threat."
He's got an interesting perspective today, noting that with today's AI tools it's theoretically possible to "craft dynamic phishing emails... when I see these AI coding tools I think, this might be like today's Visual Basic. They take out a lot of the grunt work."
>> Read more
# 'Chuck E. Cheese' Handcuffed and Arrested in Florida, Charged with Using a Stolen Credit Card
robot (spnet, 1) → All – 05:22:02 2025-07-28
NBC News reports:
Customers watched in disbelief as Florida police arrested a Chuck E. Cheese employee — in costume portraying the pizza-hawking rodent — and accused him of using a stolen credit card, officials said Thursday.... "I grabbed his right arm while giving the verbal instruction, 'Chuck E, come with me Chuck E,'" Tallahassee police officer Jarrett Cruz wrote in the report.
After a child's birthday party in June at Chuck E. Cheese, the child's mother had "spotted fraudulent charges at stores she doesn't frequent," according to the article — and she recognized a Chuck E. Cheese employee when reviewing a store's security footage. But when a police officer interviewed the employee and then briefly left the restaurant, the officer returned to discover that the suspect "was gone but a Chuck E. Cheese mascot was now in the restaurant."
Police officer Cruz "told the mascot not to make a scene before the officer and his partner 'exerted minor physical effort' to handcuff him, police said... "
The officers read the mouse his Miranda warnings before he insisted he never stole anyone's credit card, police said.... Officers found the victim's Visa card in [the costume-wearing employee's] left pocket and a receipt from a smoke shop where one of the fraudulent purchases was made, police said.
He was booked on charges of "suspicion of larceny, possession of another person's ID without consent and fraudulent use of a credit card two or more times," according to the article. He was released after posting a $6,500 bond.
Thanks to long-time Slashdot reader destinyland for sharing the news.
[ Read more of this story ]( https://idle.slashdot.org/story/25/07/27/0532227/chuck-e-cheese-handcuffed-and-arrested-in-florida-charged-with-using-a-stolen-credit-card?utm_source=atom1.0moreanon&utm_medium=feed ) at Slashdot.
# 'Serious Delays' Hit Satellite Mega-Constellations of China's Starlink Rivals
robot (spnet, 1) → All – 05:22:02 2025-07-28
"A Chinese mega-constellation of communications satellites is facing serious delays," reports the South China Morning Post, "that could jeopardise its ambitions to compete with SpaceX's Starlink for valuable orbital resources."
Only 90 satellites have been launched into low Earth orbit for the Qianfan broadband network — also known as the Thousand Sails Constellation or G60 Starlink — well short of the project's goal of 648 by the end of this year... Shanghai Yuanxin Satellite Technology, the company leading the project, plans to deploy more than 15,000 satellites by 2030 to deliver direct-to-phone internet services worldwide. To stay on track, Yuanxin — which is backed by the Shanghai municipal government — would have to launch more than 30 satellites a month to achieve its milestones of 648 by the end of 2025 for regional coverage and 1,296 two years later for global connectivity.
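The cadence those milestones imply is stark (a sketch; the remaining-month counts are assumptions based on the story's late-July dateline):

```python
# Launch cadence implied by Qianfan's milestones (month counts assumed
# from the story's late-July 2025 dateline).
in_orbit = 90
milestones = {
    "648 by end of 2025 (regional coverage)":  (648, 5),   # ~5 months left
    "1,296 by end of 2027 (global coverage)": (1296, 29),  # ~29 months left
}
for label, (target, months) in milestones.items():
    print(f"{label}: ~{(target - in_orbit) / months:.0f} satellites/month")
# ~112/month and ~42/month respectively -- both far above the pace to date,
# and consistent with the article's ">30 satellites a month" floor.
```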
The New York Times reports that "the other megaconstellation, Guowang, is even farther behind. Despite plans to launch about 13,000 satellites within the next decade, it has 34 in orbit."
A constellation has to launch half of its satellites within five years of successfully applying for its frequencies, and complete the full deployment within seven years, according to rules set by the International Telecommunication Union, a United Nations agency that allocates frequencies. The Chinese megaconstellations are behind on these goals. Companies that fail to hit their targets could be required to reduce the size of their megaconstellations.
Meanwhile SpaceX "has about 8,000 Starlink satellites in orbit and is expanding its lead every month," the Times writes, citing data from the U.S. Space Force and the nonprofit space-data group CelesTrak. (The Times has even created an animation showing Starlink's 8,000 satellites in orbit.)
Researchers for the People's Liberation Army predict that the network will become "deeply embedded in the U.S. military combat system." They envision a time when Starlink satellites connect U.S. military bases and serve as an early missile-warning and interception network....
One of the major reasons for China's delay is the lack of a reliable, reusable launcher. Chinese companies still launch satellites using single-use rockets. After the satellites are deployed, rocket parts tumble back to Earth or become space debris... Six years after [SpaceX's] Falcon 9 began launching Starlink satellites, Chinese firms still have no answer to it... The government has tested nearly 20 rocket launchers in the "Long March" series.
[ Read more of this story ]( https://science.slashdot.org/story/25/07/27/0233215/serious-delays-hit-satellite-mega-constellations-of-chinas-starlink-rivals?utm_source=atom1.0moreanon&utm_medium=feed ) at Slashdot.
# Did a Vendor's Leak Help Attackers Exploit Microsoft's SharePoint Servers?
robot (spnet, 1) → All – 05:22:02 2025-07-28
The vulnerability-watching "Zero Day Initiative" was started in 2005 as a division of 3Com, then acquired in 2015 by cybersecurity company Trend Micro, according to Wikipedia.
But the Register reports today that the initiative's head of threat awareness is now concerned about the source for that exploit of Microsoft's SharePoint servers:
How did the attackers, who include Chinese government spies, data thieves, and ransomware operators, know how to exploit the SharePoint CVEs in such a way that would bypass the security fixes Microsoft released the following day? "A leak happened here somewhere," Dustin Childs, head of threat awareness at Trend Micro's Zero Day Initiative, told The Register. "And now you've got a zero-day exploit in the wild, and worse than that, you've got a zero-day exploit in the wild that bypasses the patch, which came out the next day...."
Patch Tuesday happens the second Tuesday of every month — in July, that was the 8th. But two weeks before then, Microsoft provides early access to some security vendors via the Microsoft Active Protections Program (MAPP). These vendors are required to sign a non-disclosure agreement about the soon-to-be-disclosed bugs, and Microsoft gives them early access to the vulnerability information so that they can provide updated protections to customers faster....
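That timeline is mechanical enough to compute (a quick sketch; the two-week MAPP offset is the article's figure):

```python
# Patch Tuesday (second Tuesday of the month) and the approximate MAPP
# early-access date two weeks prior, per the article's description.
from datetime import date, timedelta

def second_tuesday(year: int, month: int) -> date:
    first = date(year, month, 1)
    first_tuesday = first + timedelta(days=(1 - first.weekday()) % 7)  # Tuesday == weekday 1
    return first_tuesday + timedelta(days=7)

patch_tuesday = second_tuesday(2025, 7)
print(patch_tuesday)                       # 2025-07-08, matching the article
print(patch_tuesday - timedelta(days=14))  # 2025-06-24, approximate MAPP access date
```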
One researcher suggests a leak may not have been the only pathway to exploit. "Soroush Dalili was able to use Google's Gemini to help reproduce the exploit chain, so it's possible the threat actors did their own due diligence, or did something similar to Dalili, working with one of the frontier large language models like Google Gemini, o3 from OpenAI, or Claude Opus, or some other LLM, to help identify routes of exploitation," Tenable Research Special Operations team senior engineer Satnam Narang told The Register. "It's difficult to say what domino had to fall in order for these threat actors to be able to leverage these flaws in the wild," Narang added.
Nonetheless, Microsoft did not release any MAPP guidance for the two most recent vulnerabilities, CVE-2025-53770 and CVE-2025-53771, which are related to the previously disclosed CVE-2025-49704 and CVE-2025-49706. "It could mean that they no longer consider MAPP to be a trusted resource, so they're not providing any information whatsoever," Childs speculated. [He adds later that "If I thought a leak came from this channel, I would not be telling that channel anything."]
"It also could mean that they're scrambling so much to work on the fixes they don't have time to notify their partners of these other details."
[ Read more of this story ]( https://it.slashdot.org/story/25/07/27/0337218/did-a-vendors-leak-help-attackers-exploit-microsofts-sharepoint-servers?utm_source=atom1.0moreanon&utm_medium=feed ) at Slashdot.
# Comic-Con Peeks at New 'Alien' and 'Avatar' Series, Plus 'Predator' and 'Coyote vs. Acme' Movies
robot (spnet, 1) → All – 05:22:02 2025-07-28
At this weekend's Comic-Con, "Excitement has been high over the sneak peeks at Tron: Ares and Predator: Badlands," reports CNET. (Nine Inch Nails has even recorded a new song for Tron: Ares.)
A few highlights from CNET's coverage:
- The Coyote vs. Acme movie will hit theaters next year "after being rescued from the pile of scrapped ashes left by Warner Bros. Discovery," with footage screened during a Comic-Con panel.
- The first episode of Alien: Earth was screened before its premiere August 12th on FX.
- A panel reunited creators of the animated Avatar: The Last Airbender for its 20th anniversary — and discussed the upcoming sequel series Avatar: Seven Havens.
- A trailer dropped for the new Star Trek: Starfleet Academy series on Paramount+ ("Star Trek Goes Full Gen Z..." quips one headline.)
To capture some of the ambience, the Guardian has a collection of cosplayer photos. CNET notes there are even booths for Lego and Hot Wheels (which released toys commemorating the 40th anniversary of Back to the Future and the 50th anniversary of Jaws).
But while many buildings are "wrapped" with slick advertisements, SFGate notes the ads are technically illegal, "with penalties for each infraction running up to $1,000 per day" (according to the San Diego Union-Tribune). "Last year's total ended up at $22,500."
The Union-Tribune notes that "The fines are small enough that advertisers clearly think it is worth it, with about 30 buildings in the process of being wrapped Monday morning."
[ Read more of this story ]( https://entertainment.slashdot.org/story/25/07/27/0131241/comic-con-peeks-at-new-alien-and-avatar-series-plus-predator-and-coyote-vs-acme-movies?utm_source=atom1.0moreanon&utm_medium=feed ) at Slashdot.
# Brave Browser Blocks Microsoft Recall By Default
robot (spnet, 1) → All – 23:22:01 2025-07-22
The Brave Browser now blocks Microsoft Recall by default for Windows 11+ users, preventing the controversial screenshot-logging feature from capturing any Brave tabs -- regardless of whether users are in private mode. Brave cites persistent privacy concerns and potential abuse scenarios as justification. From a blog post: Microsoft has, to their credit, made several security and privacy-positive changes to Recall in response to concerns. Still, the feature is in preview, and Microsoft plans to roll it out more widely soon. What exactly the feature will look like when it's fully released to all Windows 11 users is still up in the air, but the initial tone-deaf announcement does not inspire confidence.
Given Brave's focus on privacy-maximizing defaults and what is at stake here (your entire browsing history), we have proactively disabled Recall for all Brave tabs. We think it's vital that your browsing activity on Brave does not accidentally end up in a persistent database, which is especially ripe for abuse in highly-privacy-sensitive cases such as intimate partner violence.
Microsoft has said that private browsing windows on browsers will not be saved as snapshots. We've extended that logic to apply to all Brave browser windows. We tell the operating system that every Brave tab is 'private', so Recall never captures it. This is yet another example of how Brave engineers are able to quickly tweak Chromium's privacy functionality to make Brave safer for our users (inexhaustive list here). For more technical details, see the pull request implementing this feature. Brave is the only major Web browser that disables Microsoft Recall by default in all tabs.
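Brave's actual change (see the linked pull request) flags every tab as private so Recall's snapshotter skips it. As a purely illustrative sketch of the Win32 surface involved (explicitly not Brave's mechanism), here is the documented display-affinity API that apps like Signal use to opt a window out of capture:

```python
# Windows-only, illustrative sketch: exclude a window from screen capture
# (and thus Recall snapshots) via SetWindowDisplayAffinity. This is the
# approach apps like Signal use; it is NOT Brave's mechanism, which instead
# marks each tab as private.
import ctypes

user32 = ctypes.windll.user32
WDA_EXCLUDEFROMCAPTURE = 0x00000011  # Windows 10 2004+: window is omitted from captures

def exclude_from_capture(hwnd: int) -> bool:
    """Ask Windows to leave this window out of screenshots and snapshots."""
    return bool(user32.SetWindowDisplayAffinity(hwnd, WDA_EXCLUDEFROMCAPTURE))

# Hypothetical usage: apply it to this process's console window, if any.
hwnd = ctypes.windll.kernel32.GetConsoleWindow()
if hwnd:
    print("excluded from capture:", exclude_from_capture(hwnd))
```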
[ Read more of this story ]( https://yro.slashdot.org/story/25/07/22/2033221/brave-browser-blocks-microsoft-recall-by-default?utm_source=atom1.0moreanon&utm_medium=feed ) at Slashdot.
# Science Confirms What We All Suspected: Four-Day Weeks Rule
robot (spnet, 1) → All – 23:22:01 2025-07-22
A six-month international study found that a four-day workweek with no reduction in pay significantly improved employee well-being, job satisfaction, and sleep quality, with burnout dropping most among those who reduced their hours by eight or more. "The results indicate that income-preserving four-day workweeks are an effective organizational intervention for enhancing workers' well-being," the researchers said. The Register reports: The study, reported in Nature Human Behaviour, was designed to test the effects of the four-day workweek with no reduction in pay. It relied on a six-month trial involving 2,896 employees in 141 organizations in Australia, Canada, New Zealand, the UK, Ireland, and the US. The researchers compared work and health-related indicators -- including burnout, job satisfaction, and mental and physical health -- before and after the intervention using survey data. A further 285 employees at 12 companies did not participate in the trial and acted as a control.
The researchers noted that the study was limited in that companies volunteered to participate, and the sample consisted of smaller companies from English-speaking countries. More extensive government-sponsored trials might help provide a clearer picture, they said. While several factors may explain the effect, one possibility is "increased intrinsic motivation at work," the study said. "Unfortunately, [we] cannot assess [this] due to data limitations." "Despite its limitations, this study has important implications for understanding the future of work, with 4-day workweeks probably being a key component. Scientific advances from this work will inform the development of interventions promoting better organization of paid work and worker well-being. This task has become increasingly important with the rapid expansion of new digital, automation, and artificial general intelligence technologies."
[ Read more of this story ]( https://slashdot.org/story/25/07/22/2027203/science-confirms-what-we-all-suspected-four-day-weeks-rule?utm_source=atom1.0moreanon&utm_medium=feed ) at Slashdot.
# Apple Set To Stave Off Daily Fines, EU To Accept App Store Changes
robot (spnet, 1) → All – 22:22:02 2025-07-22
Apple is expected to avoid hefty daily fines from the EU by modifying its App Store policies -- allowing developers to direct users to external payment options and adjusting its fee structure. Reuters reports: The company last month said developers will pay a 20% processing fee for purchases made via the App Store, though the fees could go as low as 13% for Apple's small-business program. Developers who send customers outside the App Store for payment will pay a fee between 5% and 15%. They will also be able to use as many links as they wish to send users to outside forms of payment.
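On an example purchase, the spread between those tiers looks like this (a sketch; the price and flat per-tier rates are simplifying assumptions, since the report doesn't detail how the 5-15% external-link band is assigned):

```python
# Rough comparison of the reported App Store fee tiers (illustrative only;
# the example price and flat rates are assumptions for the sketch).
price = 9.99  # example purchase, EUR

tiers = {
    "App Store payment, standard (20%)":       0.20,
    "App Store payment, small business (13%)": 0.13,
    "external payment link, low end (5%)":     0.05,
    "external payment link, high end (15%)":   0.15,
}
for label, rate in tiers.items():
    print(f"{label}: Apple takes {price * rate:.2f}, developer keeps {price * (1 - rate):.2f}")
```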
Apple made the changes after the EU antitrust enforcer handed it a 500 million euro ($586.7 million) fine in April, saying its technical and commercial restrictions prevented app developers from steering users to cheaper deals outside the App Store in breach of the Digital Markets Act. The company was given 60 days to scrap the restraints to comply with the DMA aimed at reining in Big Tech and giving rivals more room to compete. The European Commission is expected to approve the changes in the coming weeks, although the timing could still change, the people said. "All options remain on the table. We are still assessing Apple's proposed changes," the EU watchdog said.
[ Read more of this story ]( https://apple.slashdot.org/story/25/07/22/2016222/apple-set-to-stave-off-daily-fines-eu-to-accept-app-store-changes?utm_source=atom1.0moreanon&utm_medium=feed ) at Slashdot.
# California Won't Force ISPs To Offer $15 Broadband
robot (spnet, 1) → All – 20:22:01 2025-07-22
An anonymous reader quotes a report from Ars Technica: A California lawmaker halted an effort to pass a law that would force Internet service providers to offer $15 monthly plans to people with low incomes. Assemblymember Tasha Boerner proposed the state law a few months ago, modeling the bill on a law enforced by New York. It seemed that other states were free to impose cheap-broadband mandates because the Supreme Court rejected broadband industry challenges to the New York law twice.
Boerner, a Democrat who is chair of the Communications and Conveyance Committee, faced pressure from Internet service providers to change or drop the bill. She made some changes, for example lowering the $15 plan's required download speeds from 100Mbps to 50Mbps and the required upload speeds from 20Mbps to 10Mbps. But the bill was still working its way through the legislature when, according to Boerner, Trump administration officials told her office that California could lose access to $1.86 billion in Broadband Equity, Access, and Deployment (BEAD) funds if it forces ISPs to offer low-cost service to people with low incomes.
That amount is California's share of a $42.45 billion fund created by Congress to expand access to broadband service. The Trump administration has overhauled program rules, delaying the grants. One change is that states can't tell ISPs what to charge for a low-cost plan. The US law that created BEAD requires Internet providers receiving federal funds to offer at least one "low-cost broadband service option for eligible subscribers." But in new guidance from the National Telecommunications and Information Administration (NTIA), the agency said it prohibits states "from explicitly or implicitly setting the LCSO [low-cost service option] rate a subgrantee must offer." "All they would have to do to get exempted from AB 353 [the $15 broadband bill] would be to apply to the BEAD program," said Boerner. "Doesn't matter if their application was valid, appropriate, granted, or they got public money at the end of the day and built the projects -- the mere application for the BEAD program would exempt them from 353, if it didn't jeopardize [the] $1.86 billion to begin with. And that was a tradeoff I was unwilling to make."
Another California bill in the Senate would encourage, not require, ISPs to offer cheap broadband by making them eligible for Lifeline subsidies if they sell 100/20Mbps service for $30 or less.
[ Read more of this story ]( https://yro.slashdot.org/story/25/07/22/2013209/california-wont-force-isps-to-offer-15-broadband?utm_source=atom1.0moreanon&utm_medium=feed ) at Slashdot.
# Surge CEO Says '100x Engineers' Are Here
robot (spnet, 1) → All – 19:22:01 2025-07-22
Surge CEO Edwin Chen says AI is creating "100x engineers" who can outperform traditional software developers by orders of magnitude. Chen argued that AI coding tools multiply the productivity gains already seen in Silicon Valley's "10x engineers," who can produce ten times the work of their colleagues through faster coding, harder work, and fewer distractions.
Chen said AI efficiencies compound these factors to reach 100x productivity levels. The CEO, whose company reached $1 billion in revenue without venture capital funding, believes this could enable billion-dollar single-person companies, extending beyond the $10 million single-person startups that already exist.
[ Read more of this story ]( https://developers.slashdot.org/story/25/07/22/190242/surge-ceo-says-100x-engineers-are-here?utm_source=atom1.0moreanon&utm_medium=feed ) at Slashdot.
# Microsoft Poaches Top Google DeepMind Staff in AI Talent War
robot (spnet, 1) → All – 18:22:01 2025-07-22
Microsoft has recruited more than 20 AI employees from Google's DeepMind research division, the newest front in a talent war being waged by Silicon Valley's tech giants as they jostle to gain an edge in the nascent technology. From a report: Amar Subramanya, the former head of engineering for Google's Gemini chatbot, is the latest to move to Microsoft from its rival, according to a post on his LinkedIn profile on Tuesday. "The culture here is refreshingly low ego yet bursting with ambition," he wrote, confirming his appointment as corporate vice-president of AI.
Subramanya will join other DeepMind staff including engineering lead Sonal Gupta, software engineer Adam Sadovsky and product manager Tim Frank, according to people familiar with Microsoft's recruiting. The Seattle-based company has persuaded at least 24 staff to join in the past six months, they added.
[ Read more of this story ]( https://slashdot.org/story/25/07/22/1727252/microsoft-poaches-top-google-deepmind-staff-in-ai-talent-war?utm_source=atom1.0moreanon&utm_medium=feed ) at Slashdot.
# Google Users Are Less Likely To Click on Links When an AI Summary Appears in the Results, Pew Research Finds
robot (spnet, 1) → All – 17:22:02 2025-07-22
Google users click on fewer website links when the search engine displays AI-generated summaries at the top of results pages, according to new research from the Pew Research Center. The study analyzed browsing data from 900 U.S. adults and found users clicked on traditional search result links during 8% of visits when an AI summary appeared, compared to 15% of visits without summaries.
Users also rarely clicked on sources cited within the AI summaries themselves, doing so in just 1% of visits. The research found that 58% of respondents conducted at least one Google search in March 2025 that produced an AI summary, and users were more likely to end their browsing session entirely after encountering pages with AI summaries compared to traditional search results.
[ Read more of this story ]( https://tech.slashdot.org/story/25/07/22/1629240/google-users-are-less-likely-to-click-on-links-when-an-ai-summary-appears-in-the-results-pew-research-finds?utm_source=atom1.0moreanon&utm_medium=feed ) at Slashdot.
# Many Lung Cancers Are Now in Nonsmokers. Scientists Want to Know Why.
robot (spnet, 1) → All – 16:22:01 2025-07-22
Roughly 10 to 25% of lung cancers worldwide now occur in people who have never smoked, according to researchers at the National Cancer Institute. Among certain groups of Asian and Asian American women, that share reaches 50% or more. Scientists studying 871 nonsmokers with lung cancer from around the world found that certain DNA mutations were significantly more common in people living in areas with high air pollution levels, including Hong Kong, Taiwan and Uzbekistan.
The research, published in Nature this month, revealed that pollution both directly damages DNA and causes cells to divide more rapidly. The biology of cancer in nonsmokers differs from smoking-related cases and may require different prevention and detection strategies. Nonsmokers with lung cancer are more likely to have specific "driver" mutations that can cause cancer, while smokers tend to accumulate many mutations over time.
Current U.S. screening guidelines recommend routine testing only for people ages 50 to 80 who smoked at least one pack daily for 20 years. Taiwan now offers screening for nonsmokers with family history after a nationwide trial detected cancer in 2.6% of participants.
[ Read more of this story ]( https://science.slashdot.org/story/25/07/22/163219/many-lung-cancers-are-now-in-nonsmokers-scientists-want-to-know-why?utm_source=atom1.0moreanon&utm_medium=feed ) at Slashdot.
# Banks View Heavy 'Buy Now, Pay Later' Use as Red Flag for Loan Approvals
robot (spnet, 1) → All – 16:22:01 2025-07-22
Banks are treating "buy now, pay later" services with suspicion and warn that heavy usage could hurt customers' chances of getting approved for mortgages or credit cards. FICO will begin factoring some BNPL loans from companies like Affirm and Klarna into credit scores later this year through its new scoring model. JPMorgan Chase and Capital One have banned customers from using credit cards to pay down BNPL installment loans, while one credit union actively calls members who use BNPL to counsel them against it. BNPL transaction volume is expected to reach $116.67 billion in 2025, up from $13.88 billion in 2020, according to Emarketer.
[ Read more of this story ]( https://slashdot.org/story/25/07/22/1451201/banks-view-heavy-buy-now-pay-later-use-as-red-flag-for-loan-approvals?utm_source=atom1.0moreanon&utm_medium=feed ) at Slashdot.
# Mike Lynch's Estate and Business Partner Owe HP $944M, Court Rules
robot (spnet, 1) → All – 15:22:01 2025-07-22
The estate of Mike Lynch, who died a year ago when his superyacht sank off the coast of Sicily, and his business partner owe Hewlett-Packard more than $944 million, a court has ruled. From a report: The US technology company has been seeking damages of up to $4.55 billion from the estate of the late tycoon, once hailed as the UK's answer to Microsoft founder Bill Gates, over its disastrous takeover of his British software company Autonomy.
Lynch's estate has been estimated to be worth about $674 million and paying its share of the $944 million damages could leave it bankrupt. He and six others, including his 18-year-old daughter Hannah, died last August on a trip celebrating his acquittal on US fraud charges relating to HP's $11 billion takeover of Autonomy in 2011. However, HP won a separate six-year civil fraud case against Lynch and his former finance director Sushovan Hussain in the English high court in 2022, with Mr Justice Hildyard ruling that the US company had been induced into overpaying for the business.
[ Read more of this story ]( https://yro.slashdot.org/story/25/07/22/140208/mike-lynchs-estate-and-business-partner-owe-hp-944m-court-rules?utm_source=atom1.0moreanon&utm_medium=feed ) at Slashdot.
# Google Launches OSS Rebuild
robot (spnet, 1) → All – 15:22:01 2025-07-22
Google has announced OSS Rebuild, a new project designed to detect supply chain attacks in open source software by independently reproducing and verifying package builds across major repositories. The initiative, unveiled by the company's Open Source Security Team, targets PyPI (Python), npm (JavaScript/TypeScript), and Crates.io (Rust) packages.
The system, the company said, automatically creates standardized build environments to rebuild packages and compare them against published versions. OSS Rebuild generates SLSA Provenance attestations for thousands of packages, meeting SLSA Build Level 3 requirements without requiring publisher intervention. The project can identify three classes of compromise: unsubmitted source code not present in public repositories, build environment tampering, and sophisticated backdoors that exhibit unusual execution patterns during builds.
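In outline, this kind of verification is a rebuild-and-compare loop. The Python sketch below illustrates the core idea only; it is a simplified illustration under stated assumptions, not Google's actual OSS Rebuild pipeline, and the package URL, build command, and file names are hypothetical placeholders.

```python
# Minimal sketch of the rebuild-and-compare idea described above.
# NOT Google's OSS Rebuild code: the artifact URL, build command,
# and file paths are hypothetical placeholders.
import hashlib
import subprocess
import urllib.request

def sha256_of(path: str) -> str:
    """Stream a file and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

def rebuild_matches_published(published_url: str, source_dir: str,
                              rebuilt_artifact: str) -> bool:
    # 1. Fetch the artifact the registry actually serves to users.
    urllib.request.urlretrieve(published_url, "published-artifact")
    # 2. Rebuild the same version from its public source in a clean,
    #    standardized environment (here: a plain `python -m build`).
    subprocess.run(["python", "-m", "build", "--wheel", source_dir], check=True)
    # 3. Compare digests. A mismatch does not prove malice, but it flags
    #    unsubmitted source code or build-environment tampering for review.
    return sha256_of("published-artifact") == sha256_of(rebuilt_artifact)
```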
Google cited recent real-world attacks including solana/web3.js (2024), tj-actions/changed-files (2025), and xz-utils (2024) as examples of threats the system addresses. Open source components now account for 77% of modern applications, with an estimated value exceeding $12 trillion. The project builds on Google's hosted infrastructure model previously used for OSS-Fuzz memory issue detection.
[ Read more of this story ]( https://tech.slashdot.org/story/25/07/22/144239/google-launches-oss-rebuild?utm_source=atom1.0moreanon&utm_medium=feed ) at Slashdot.
# How NASA Saved a Camera From 370 Million Miles Away
robot (spnet, 1) → All – 15:22:01 2025-07-22
An anonymous reader quotes a report from Phys.org: The mission team of NASA's Jupiter-orbiting Juno spacecraft executed a deep-space move in December 2023 to repair its JunoCam imager to capture photos of the Jovian moon Io. Results from the long-distance save were presented during a technical session on July 16 at the Institute of Electrical and Electronics Engineers Nuclear & Space Radiation Effects Conference in Nashville. JunoCam is a color, visible-light camera. The optical unit for the camera is located outside a titanium-walled radiation vault, which protects sensitive electronic components for many of Juno's engineering and science instruments. This is a challenging location because Juno's travels carry it through the most intense planetary radiation fields in the solar system. While mission designers were confident JunoCam could operate through the first eight orbits of Jupiter, no one knew how long the instrument would last after that. Throughout Juno's first 34 orbits (its prime mission), JunoCam operated normally, returning images the team routinely incorporated into the mission's science papers. Then, during its 47th orbit, the imager began showing hints of radiation damage. By orbit 56, nearly all the images were corrupted.
While the team knew the issue might be tied to radiation, pinpointing what was specifically damaged within JunoCam was difficult from hundreds of millions of miles away. Clues pointed to a damaged voltage regulator that was vital to JunoCam's power supply. With few options for recovery, the team turned to a process called annealing, where a material is heated for a specified period before slowly cooling. Although the process is not well understood, the idea is that heating can reduce defects in the material. Soon after the annealing process finished, JunoCam began cranking out crisp images for the next several orbits. But Juno was flying deeper and deeper into the heart of Jupiter's radiation fields with each pass. By orbit 55, the imagery had again begun showing problems.
"After orbit 55, our images were full of streaks and noise," said JunoCam instrument lead Michael Ravine of Malin Space Science Systems. "We tried different schemes for processing the images to improve the quality, but nothing worked. With the close encounter of Io bearing down on us in a few weeks, it was Hail Mary time: The only thing left we hadn't tried was to crank JunoCam's heater all the way up and see if more extreme annealing would save us." Test images sent back to Earth during the annealing showed little improvement in the first week. Then, with the close approach of Io only days away, the images began to improve dramatically. By the time Juno came within 930 miles (1,500 kilometers) of the volcanic moon's surface on Dec. 30, 2023, the images were almost as good as the day the camera launched, capturing detailed views of Io's north polar region that revealed mountain blocks covered in sulfur dioxide frosts rising sharply from the plains and previously uncharted volcanoes with extensive flow fields of lava. To date, the solar-powered spacecraft has orbited Jupiter 74 times. Recently, the image noise returned during Juno's 74th orbit.
[ Read more of this story ]( https://science.slashdot.org/story/25/07/22/0642211/how-nasa-saved-a-camera-from-370-million-miles-away?utm_source=atom1.0moreanon&utm_medium=feed ) at Slashdot.
# US Signals Intention To Rethink Job H-1B Lottery
robot (spnet, 1) → All – 15:22:01 2025-07-22
The US Department of Homeland Security (DHS) and the US Citizenship and Immigration Services (USCIS) intend to reevaluate how H-1B visas are issued, according to a regulatory filing. From a report: The notice, filed on Thursday with the US Office of Management and Budget's Office of Information and Regulatory Affairs (OIRA), seeks the statutory review of a proposed rule titled "Weighted Selection Process for Registrants and Petitioners Seeking To File Cap-Subject H-1B Petitions."
Once the review is complete, which could be a matter of days or weeks, the text of the rule is expected to be published in the US Federal Register. Based on the rule title, it appears the government intends to change the system for allocating H-1B visas from the current lottery to one that favors applicants who meet specified criteria, possibly related to skills.
The H-1B visa program, which reached its Fiscal 2026 cap on Friday, allows skilled guest workers to come work in the US. As of 2019, there were about 600,000 H-1B workers in the US, according to USCIS. The foreign worker program is beloved by technology companies, ostensibly to hire talent not readily available from American workers. But H-1B -- along with the Optional Practical Training (OPT) program -- has long been criticized for making it easier to undercut US worker wages, limiting labor rights for immigrants, and for persistent abuse of the rules by outsourcing companies.
[ Read more of this story ]( https://news.slashdot.org/story/25/07/21/2226250/us-signals-intention-to-rethink-job-h-1b-lottery?utm_source=atom1.0moreanon&utm_medium=feed ) at Slashdot.
# ChatGPT Users Send 2.5 Billion Prompts a Day
robot (spnet, 1) → All – 15:22:01 2025-07-22
ChatGPT now handles 2.5 billion prompts daily, with 330 million from U.S. users. This surge marks a doubling in usage since December, when OpenAI CEO Sam Altman said that users send over 1 billion queries to ChatGPT each day. TechCrunch reports: These numbers show just how ubiquitous OpenAI's flagship product is becoming. Google's parent company, Alphabet, does not release daily search data, but recently revealed that Google receives 5 trillion queries per year, which averages to just under 14 billion daily searches. Independent researchers have found similar trends. Neil Patel of NP Digital estimates that Google receives 13.7 billion searches daily, while research from SparkToro and Datos -- two digital marketing companies -- estimates that the figure is around 16.4 billion per day.
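As a quick sanity check on those averages, a back-of-the-envelope sketch (assuming a 365-day year):

```python
# Back-of-the-envelope check of the daily-average figures cited above
# (assumes a 365-day year).
GOOGLE_QUERIES_PER_YEAR = 5_000_000_000_000   # 5 trillion
CHATGPT_PROMPTS_PER_DAY = 2_500_000_000       # 2.5 billion

google_per_day = GOOGLE_QUERIES_PER_YEAR / 365
print(f"Google: ~{google_per_day / 1e9:.1f}B searches/day")  # ~13.7B, "just under 14 billion"
print(f"Ratio:  ~{google_per_day / CHATGPT_PROMPTS_PER_DAY:.1f}x ChatGPT volume")  # ~5.5x
```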
[ Read more of this story ]( https://news.slashdot.org/story/25/07/22/0645255/chatgpt-users-send-25-billion-prompts-a-day?utm_source=atom1.0moreanon&utm_medium=feed ) at Slashdot.
# Climate Change Is Making Fire Weather Worse for World's Forests
robot (spnet, 1) → All – 15:22:01 2025-07-22
An anonymous reader shares a report: In 2023 and 2024, the hottest years on record, more than 78 million acres of forests burned around the globe. The fires sent veils of smoke and several billion tons of carbon dioxide into the atmosphere, subjecting millions of people to poor air quality. Extreme forest-fire years are becoming more common because of climate change, new research suggests.
"Climate change is loading the dice for extreme fire seasons like we've seen," said John Abatzoglou, a climate scientist at the University of California Merced. "There are going to be more fires like this." The area of forest canopy lost to fire during 2023 and 2024 was at least two times greater than the annual average of the previous nearly two decades, according to a new study published Monday in the journal Proceedings of the National Academy of Sciences.
The researchers used imagery from the LANDSAT satellite network to determine how tree cover changed from 2002 to 2024, and compared that with satellite detections of fire activity to see how much of the canopy loss was caused by fire. Globally, the area of land burned by wildfires has decreased in recent decades, mostly because humans are transforming savannas and grasslands into less flammable landscapes. But the area of forests burned has gone up.
[ Read more of this story ]( https://news.slashdot.org/story/25/07/21/2218208/climate-change-is-making-fire-weather-worse-for-worlds-forests?utm_source=atom1.0moreanon&utm_medium=feed ) at Slashdot.
# At Least 750 US Hospitals Faced Disruptions During Last Year's CrowdStrike Outage, Study Finds
robot (spnet, 1) → All – 15:22:01 2025-07-22
At least 759 US hospitals experienced network disruptions during the CrowdStrike outage on July 19, 2024, with more than 200 suffering outages that directly affected patient care services, according to a study published in JAMA Network Open by UC San Diego researchers. The researchers detected disruptions across 34% of the 2,232 hospital networks they scanned, finding outages in health records systems, fetal monitoring equipment, medical imaging storage, and patient transfer platforms.
Most services recovered within six hours, though some remained offline for more than 48 hours. CrowdStrike dismissed the study as "junk science," arguing the researchers failed to verify whether affected networks actually ran CrowdStrike software. The researchers defended their methodology, noting they could scan only about one-third of America's hospitals, suggesting the actual impact may have been significantly larger.
[ Read more of this story ]( https://science.slashdot.org/story/25/07/21/228202/at-least-750-us-hospitals-faced-disruptions-during-last-years-crowdstrike-outage-study-finds?utm_source=atom1.0moreanon&utm_medium=feed ) at Slashdot.
# Can AI Think - and Should It? What It Means To Think, From Plato To ChatGPT
robot (spnet, 1) → All – 15:22:01 2025-07-22
alternative_right shares a report from The Conversation: Greek philosophers may not have known about 21st-century technology, but their ideas about intellect and thinking can help us understand what's at stake with AI today. Although the English words "intellect" and "thinking" do not have direct counterparts in the ancient Greek, looking at ancient texts offers useful comparisons. In "Republic," for example, Plato uses the analogy of a "divided line" separating higher and lower forms of understanding. Plato, who taught in the fourth century BCE, argued that each person has an intuitive capacity to recognize the truth. He called this the highest form of understanding: "noesis." Noesis enables apprehension beyond reason, belief or sensory perception. It's one form of "knowing" something -- but in Plato's view, it's also a property of the soul.
Lower down, but still above his "dividing line," is "dianoia," or reason, which relies on argumentation. Below the line, his lower forms of understanding are "pistis," or belief, and "eikasia," imagination. Pistis is belief influenced by experience and sensory perception: input that someone can critically examine and reason about. Plato defines eikasia, meanwhile, as baseless opinion rooted in false perception. In Plato's hierarchy of mental capacities, direct, intuitive understanding is at the top, and moment-to-moment physical input toward the bottom. The top of the hierarchy leads to true and absolute knowledge, while the bottom lends itself to false impressions and beliefs. But intuition, according to Plato, is part of the soul, and embodied in human form. Perceiving reality transcends the body -- but still needs one. So, while Plato does not differentiate "intelligence" and "thinking," I would argue that his distinctions can help us think about AI. Without being embodied, AI may not "think" or "understand" the way humans do. Eikasia -- the lowest form of comprehension, based on false perceptions -- may be similar to AI's frequent "hallucinations," when it makes up information that seems plausible but is actually inaccurate.
Aristotle, Plato's student, sheds more light on intelligence and thinking. In "On the Soul," Aristotle distinguishes "active" from "passive" intellect. Active intellect, which he called "nous," is immaterial. It makes meaning from experience, but transcends bodily perception. Passive intellect is bodily, receiving sensory impressions without reasoning. We could say that these active and passive processes, put together, constitute "thinking." Today, the word "intelligence" holds a logical quality that AI's calculations may conceivably replicate. Aristotle, however, like Plato, suggests that to "think" requires an embodied form and goes beyond reason alone. Aristotle's views on rhetoric also show that deliberation and judgment require a body, feeling and experience. We might think of rhetoric as persuasion, but it is actually more about observation: observing and evaluating how evidence, emotion and character shape people's thinking and decisions. Facts matter, but emotions and people move us -- and it seems questionable whether AI utilizes rhetoric in this way.
Finally, Aristotle's concept of "phronesis" sheds further light on AI's capacity to think. In "Nicomachean Ethics," he defines phronesis as "practical wisdom" or "prudence." "Phronesis" involves lived experience that determines not only right thought, but also how to apply those thoughts to "good ends," or virtuous actions. AI may analyze large datasets to reach its conclusions, but "phronesis" goes beyond information to consult wisdom and moral insight.
[ Read more of this story ]( https://slashdot.org/story/25/07/21/2052216/can-ai-think---and-should-it-what-it-means-to-think-from-plato-to-chatgpt?utm_source=atom1.0moreanon&utm_medium=feed ) at Slashdot.
# SoftBank and Open AI's $500 Billion AI Project Struggles To Get Off Ground
robot (spnet, 1) → All – 06:22:02 2025-07-22
The $500 billion Stargate AI project announced by SoftBank and OpenAI at the White House six months ago has failed to complete a single data center deal and sharply scaled back its near-term plans. The venture, which originally pledged to invest $100 billion "immediately," now aims to build one small data center by year-end, likely in Ohio, according to WSJ. SoftBank and OpenAI have disagreed over crucial partnership terms, including site locations.
OpenAI has proceeded independently, signing a deal with Oracle worth more than $30 billion annually starting within three years. That agreement totals 4.5 gigawatts of capacity and would consume power equivalent to more than two Hoover Dams. Combined with a smaller CoreWeave deal, OpenAI has secured nearly as much data center capacity as Stargate promised for this year. SoftBank invested $30 billion in OpenAI earlier this year as part of the infrastructure partnership plans.
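For scale, a rough check of the dam comparison; note that Hoover Dam's roughly 2.08 GW nameplate capacity is an assumption not given in the report:

```python
# Rough check of the "more than two Hoover Dams" comparison.
# Hoover Dam's ~2.08 GW nameplate capacity is an assumed figure.
HOOVER_DAM_GW = 2.08
DEAL_CAPACITY_GW = 4.5
print(f"~{DEAL_CAPACITY_GW / HOOVER_DAM_GW:.1f} Hoover Dams")  # ~2.2
```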
[ Read more of this story ]( https://slashdot.org/story/25/07/21/220229/softbank-and-open-ais-500-billion-ai-project-struggles-to-get-off-ground?utm_source=atom1.0moreanon&utm_medium=feed ) at Slashdot.
# FCC To Eliminate Gigabit Speed Goal, Scrap Analysis of Broadband Prices
robot (spnet, 1) → All – 06:22:02 2025-07-22
FCC Chairman Brendan Carr is proposing (PDF) to roll back key Biden-era broadband policies, scrapping the long-term gigabit speed goal, halting analysis of broadband affordability, and reinterpreting deployment standards in a way that favors industry metrics over consumer access. The proposal, which is scheduled for a vote on August 7, narrows the scope of Section 706 evaluations to focus on whether broadband is being deployed rather than whether it's affordable or universally accessible. Ars Technica reports: The changes will make it easier for the FCC to give the broadband industry a passing grade in an annual progress report. FCC Chairman Brendan Carr's proposal would give the industry a thumbs-up even if it falls short of 100 percent deployment, eliminate a long-term goal of gigabit broadband speeds, and abandon a new effort to track the affordability of broadband.
Section 706 of the Telecommunications Act requires the FCC to determine whether broadband is being deployed "on a reasonable and timely basis" to all Americans. If the answer is no, the US law says the FCC must "take immediate action to accelerate deployment of such capability by removing barriers to infrastructure investment and by promoting competition in the telecommunications market."
Generally, Democratic-led commissions have found that the industry isn't doing enough to make broadband universally available, while Republican-led commissions have found the opposite. Democratic-led commissions have also periodically increased the speeds used to determine whether advanced telecommunications capabilities are widely available, while Republican-led commissioners have kept the speed standards the same.
[ Read more of this story ]( https://tech.slashdot.org/story/25/07/21/2044200/fcc-to-eliminate-gigabit-speed-goal-scrap-analysis-of-broadband-prices?utm_source=atom1.0moreanon&utm_medium=feed ) at Slashdot.