Books. Bikes. Boomsticks.
“I only regret that I have but one face to palm for my country.”
Tech giants are signing deals with nuclear power companies to supply energy to their data centers. Many of these deals revolve around unproven Small Modular Reactors (SMRs). This is a relatively new term for most people, so I figured it was worth digging in a little to help it make sense.
First, to get the hysteria out of the way: No, Microsoft and Google will NOT be running nuclear reactors. No, AI will not be running nuclear reactors. These SMRs will be operated by highly trained operators licensed by the NRC, just like I was on my reactor. Every licensed operator is personally responsible for ensuring nuclear safety. Like go-to-prison personally responsible. It doesn’t matter whose name is on the front gate.
The code that runs Redbox DVD rental machines has been dumped online, and, in the wake of the company’s bankruptcy, a community of tinkerers and reverse engineers are probing the operating system to learn how it works. Naturally, one of the first things people did was make one of the machines run Doom.

In case you think I'm kidding, here's a dude playing Doom on an old Kodak point & shoot camera.
As has been detailed in several great articles elsewhere, the end of Redbox has been a clusterfuck, with pharmacies, grocery stores, and other retailers stuck with very large, heavy, abandoned DVD rental kiosks.
So even if the engineers manage to improve an already near-perfect model, the new camera will not really be all that different.
That is exactly what happened in the case of the Nikon D6: Like the D5, it still is a lightning-fast camera, it still has incredibly high ISO performance, and it still is indestructible. On top of that, the autofocus was improved significantly, but no one really took note of that, as the D5 was already practically infallible.
It is important to understand that major breakthroughs in physical engineering are far less likely to happen, and when they do happen, they often go relatively unnoticed.
Always keep the Pareto principle in mind: An improvement of 20% requires 80% of total development resources and is neither needed nor noticed by 80% of all users.
“NEO can enter a potentially dangerous environment to provide video and audio feedback to the officers before entry and allow them to communicate with those in that environment,” Huffman said, according to the transcript. “NEO carries an onboard computer and antenna array that will allow officers the ability to create a ‘denial-of-service’ (DDoS) event to disable ‘Internet of Things’ devices that could potentially cause harm while entry is made.”

At FLETC they even have a training house set up with various web-enabled devices like crib monitors and "nanny cams" so the Feds can practice working in that environment for entries, which makes sense, I guess. Wonder if they have a practice claymore Roomba?
DDoS attacks are a type of cyber attack where a website, server, or network is overloaded with traffic until it is knocked offline. Huffman did not provide any specifics about how a DDoS attack like this would work. But he said DHS wanted to develop this capability after a 2021 incident in which a man suspected of child sexual abuse crimes in Florida used his doorbell camera to see that he was being raided by the FBI and began shooting at them, killing two FBI agents and injuring three others.
In October, The Verge and other outlets reported on product review articles appearing on Gannett publications like USA Today that seemed to be AI-generated. Gannett maintained that the content was produced by humans and that a third-party marketing firm was responsible. Just a month later, eerily similar review articles were published on the website of Sports Illustrated, but this time, Futurism discovered that the article authors’ headshots were for sale on an AI photo website. Shortly after, Sports Illustrated said it had cut ties with the company that produced the reviews.

But sneaking content onto sites is one thing. Suppose you could instead just buy the legal remains of a once-respected but now-defunct site and reanimate it, including generating new AI-written articles under the bylines of its former authors?
The apparent AI content proved embarrassing for nearly everyone involved: venerated publications that hired a third-party marketing firm to produce content were now attempting to defend the work — and themselves — after readers discovered the low-quality junk content on their sites. Workers who had nothing to do with the stories feared it could be the beginning of the end of their jobs. In January, the Sports Illustrated newsroom was gutted by mass layoffs, though much of the staff was later rehired after its parent company found a new publisher.
In both cases, as reported by The Verge, the AI-generated content was produced by a mysterious company called AdVon Commerce, a marketing firm that boasts of its AI-powered products. There’s little information available about AdVon online, as its owners have worked to scrub their names from the internet.
In a post on X (formerly Twitter), Musk wrote that X "has no choice but to file suit against the perpetrators and collaborators" behind an advertiser boycott on his platform.

Boycotts are free speech too, Elmo you knob.
"Hopefully, some states will consider criminal prosecution," Musk wrote, leading several X users to suggest that Musk wants it to be illegal for brands to refuse to advertise on X.
Microsoft maintains Recall is an optional experience and that it has built privacy controls into the feature. You can disable certain URLs and apps, and Recall won’t store any material that’s protected with digital rights management tools. “Recall also does not take snapshots of certain kinds of content, including InPrivate web browsing sessions in Microsoft Edge, Firefox, Opera, Google Chrome, or other Chromium-based browsers,” says Microsoft on its explainer FAQ page.

I have no clue who thought this was a good idea.
However, Recall doesn’t perform content moderation, so it won’t hide information like passwords or financial account numbers in its screenshots. “That data may be in snapshots that are stored on your device, especially when sites do not follow standard internet protocols like cloaking password entry,” warns Microsoft.
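For what it's worth, I'm reading that "cloaking password entry" line as a roundabout way of saying "use a real password field." A minimal browser-side sketch of the distinction, my own illustration and nothing from Recall's actual code:

```typescript
// My own illustration, not Microsoft's code. A standard password field: the
// browser masks the typed characters, and the type attribute is the
// machine-readable hint that the contents are sensitive.
const masked: HTMLInputElement = document.createElement("input");
masked.type = "password";
masked.autocomplete = "current-password";

// A site that ignores the convention and takes passwords through a plain text
// box gives a screenshot tool nothing to key off of, which appears to be the
// failure mode Microsoft is warning about.
const unmasked: HTMLInputElement = document.createElement("input");
unmasked.type = "text";

document.body.append(masked, unmasked);
```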
Google has accidentally collected children’s voice data, leaked the trips and home addresses of car pool users, and made YouTube recommendations based on users’ deleted watch history, among thousands of other employee-reported privacy incidents, according to a copy of an internal Google database which tracks six years’ worth of potential privacy and security issues obtained by 404 Media.
Individually, the incidents, most of which have not been previously publicly reported, may each impact only a relatively small number of people, or were fixed quickly. Taken as a whole, though, the internal database shows how one of the most powerful and important companies in the world manages, and often mismanages, a staggering amount of personal, sensitive data on people's lives.
The data obtained by 404 Media includes privacy and security issues that Google’s own employees reported internally.