<![CDATA[ PCGamer ]]> https://www.pcgamer.com Fri, 25 Oct 2024 08:34:57 +0000 en <![CDATA[ Generative AI is dividing RPG fans: Can AI really play Dungeons & Dragons, and should it? ]]> In 2018, a group of grad students at Georgia Tech published a paper that noted games had often been "an important testbed for artificial intelligence" in the past. AIs have been beating us at chess, checkers, and Unreal Tournament for years, after all. Since that was the case, they argued, it made sense for future testing of AI to focus on whether it could learn to play tabletop roleplaying games. Games like Dungeons & Dragons would serve as an effective measure of the progress of AI, "due to an infinite action space, multiple (collaborative) players and models of the world, and no explicit reward signal."

What they didn't predict was how controversial AI would become by the time that was possible—a topic we'll come back to later.

In 2019, OpenAI released GPT-2, a large language model trained on eight million web pages that could generate narrative responses to short prompts. Though it wasn't fully released until November, a partial version was available by February, and within three months it had already been used to create the first version of AI Dungeon.

Though more of a text adventure than a roleplaying game, AI Dungeon was an early taste of what it would be like to have a computer for a Dungeon Master. While the results typically descend into the surreal, if not the outright nonsensical, it made a fine proof of concept. Players were soon trying to craft the perfect prompt to turn GPT-2 into a DM, whether by writing opuses over 1,000 words long or something that took fewer than 100.

Problems emerged with both approaches. The chatbot DM would often try to wrap up an entire combat in a single reply rather than letting you play it out blow-by-blow. It would forget what had happened if you played for too long, and was averse to roleplaying conversations. It would take more than just a well-written prompt to create an artificial DM.

Professionals stepped in, and now there are several alternatives to choose from, like Hidden Door, which promises to let you play a story in the world of The Wizard of Oz, Call of Cthulhu, or The Crow thanks to its use of "a unique architecture" and officially licensed source material. The end result has been underwhelming in my experience—stories hop from one disconnected scene to another in a dreamlike fashion, and actions flip-flop back and forth. In one Call of Cthulhu game I was chased by a cloaked figure who I managed to escape from, then be caught by, then escape from, then be caught by, all while gaining and losing and gaining and losing a pocketwatch in a tedious process of back-and-forth indecision on the narrator's behalf.

Examining a wooden box in Hidden Door's Call of Cthulhu game. (Image credit: Hidden Door)

For a more traditional game of Dungeons & Dragons in digital form, there's Friends & Fables. Created by William Liu and David Melnychuk of indie game studio Side Quest Labs, Friends & Fables comes with character sheets, XP-tracking, an inventory, illustrated NPCs, and a narrator nicknamed "Franz" who acts as game master.

"It's not just one chatbot that you're talking to," Melnychuk tells me. "It's more like a system where there's one part of the AI GM that reasons about, like, is something new introduced? If there is, we need to save that to the game state—or do we need to update it? Things like that, it's a bunch of different modules that come together to create the final outcome that you see when you play."

The funny thing is, while Liu and Melnychuk needed to train their large language models with examples so they know when to add an item to the inventory (Friends & Fables mainly uses Llama), the models already knew the rules of D&D 5th edition. "A lot of these large language-based models, they're actually trained on pretty much the entire internet," Melnychuk says. "They have a lot of just general knowledge about most things. That includes D&D and the SRD."

"The rule set, Wizards of the Coast has published it under the OGL and Creative Commons," Liu adds. "So it's on the internet for these language models to hoover up."

OK Computer


Adventurers try to fly a spaceship in Expedition to the Barrier Peaks. (Image credit: Wizards of the Coast)

When I play Friends & Fables, it certainly feels like a classic game of D&D. I start at an inn, with Franz describing several NPCs I could talk to. One of them has a lead on a magical artifact worth stealing, but wants to meet somewhere more private to discuss it. That leads to a journey into the tunnels beneath the city where a group of wizards have been meeting to conduct magical experiments on the sly. Along the way there are skill checks and back-and-forth dialogues with an insulting barbarian and a curious bard. It feels like exactly the kind of game of D&D an eager beginner might run.

One off-note is that Franz seems to be obsessed with the phrase "intricately carved wooden box," introducing more than one unconnected wooden box into the story with the same phrasing every time. When I bring this up, Liu asks if I met anyone named "Elara", and I tell him I did learn about a goddess with that name. Turns out, that's another thing Franz keeps falling back on.

"The way that these language models work is that they're doing next-token prediction," he explains. "They're spitting out the sentence, and they're getting to a point—and a token is just like a word, or a string of words or characters—and so it's trying to guess, 'What is the most likely next one?' It's not that those are the only names that were in the training data, but that those are the ones that end up having the highest probability to be the next token."
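Liu's description of next-token prediction can be sketched with a toy example. The snippet below is purely illustrative: the probability table, names, and numbers are invented for this sketch, and real models score tens of thousands of tokens at each step. Still, it shows why greedy decoding keeps surfacing the same high-probability favourites like "Elara", and how sampling with a temperature makes them less inevitable.

```python
import random

# Toy next-token model: made-up probabilities standing in for what a
# real LLM might assign after seeing a given context.
NEXT_TOKEN_PROBS = {
    ("a", "goddess", "named"): {"Elara": 0.31, "Selune": 0.12, "Mystra": 0.10, "Aria": 0.08},
    ("an", "intricately", "carved"): {"wooden": 0.54, "stone": 0.21, "silver": 0.09},
}

def most_likely_next(context):
    """Greedy decoding: always pick the single highest-probability token."""
    probs = NEXT_TOKEN_PROBS[context]
    return max(probs, key=probs.get)

def sample_next(context, temperature=1.0, rng=random):
    """Sampling with temperature: higher values flatten the distribution,
    so the runner-up tokens get picked more often."""
    probs = NEXT_TOKEN_PROBS[context]
    tokens = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return rng.choices(tokens, weights=weights, k=1)[0]

# Greedy decoding is deterministic, so the "favorite" shows up every time:
print(most_likely_next(("a", "goddess", "named")))        # always Elara
print(most_likely_next(("an", "intricately", "carved")))  # always wooden
```

Production chatbots typically sample rather than take the single top token every time, which is one reason raising the temperature is a common fix for this kind of repetitive output.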

Accepting a quest in Friends & Fables. (Image credit: Side Quest Labs)

Friends & Fables is still in beta, he points out, and "we're definitely working on fixing some of those things." Even with oddities like these—players on the Discord server have cataloged other Franz favorites, like windmills and hooded figures—my experience made far more sense than the amusing oddness AI Dungeon rapidly descends into.

"If we're being completely honest," Melnychuk says, "I think if AI Dungeon met our expectations of what we were expecting from an AI game we probably would have never built Friends & Fables."

Their motivation comes from a familiar story: the difficulty of getting an RPG group together in real life. "I have a couple of friends who play a lot of D&D," Liu says, "but they were deep in their own campaigns and I didn't feel like I could ask to just join after they're, like, three years deep. I also didn't really feel like just going out and finding a group for myself, of strangers."

Bardo is another AI GM—one that lets players upload their own rulebooks. (Image credit: Bard Studio)

It's a tale as old as D&D itself. Even now, with roleplaying at the height of its popularity, people struggle to find or maintain a campaign that suits them. "We get people that pop up in our Discord all the time who tell us, 'This is so great because I have two kids now, and don't have time to play,'" Liu says. "Or there's one guy who told us—he lives in Iraq, and there's just, like, nobody to play with there. He has a real hard time if he wants to play, so this was a solution for him. Just hearing people tell us that we're kind of solving this accessibility challenge for them is why we kept doing it."

Both are at pains to explain they're not interested in replacing human DMs, but rather in supplementing them. As well as providing a substitute for people who can't find a game, they want Friends & Fables to be able to help DMs run their own games.

"Think of it as a super-powered group chat," Liu says, "where you're the DM and you could send a message and say, like, 'Jody's character, roll a skill check.' Then a button on your phone pops up, and then you hit the skill check, right? That's definitely part of the future vision, but we're not there yet."

Ethics & Electronics


Robots in the 5th edition update of Expedition to the Barrier Peaks. (Image credit: Wizards of the Coast)

Who hasn't, in a moment of writer's block, turned to an online fantasy name generator? If an AI helps me run a game, is it any different to recycling bits of a pre-written scenario or showing my players NPC art I found by doing a Google image search for the phrase "elf pirate"? I use third-party stuff when I roleplay because I don't have time to do everything—my players don't need to know how often I steal NPCs from Brennan Lee Mulligan—and it doesn't make the games I run any less satisfying.

Still, AI has been a hot-button topic in the pen-and-paper RPG community. D&D publisher Wizards of the Coast came under fire for using AI-generated art, and though it swore off it, the CEO of its owner Hasbro has been much more chipper about embracing AI. Hidden Door CEO Hilary Mason has spoken at length about the ethical problems that come with generative AI, telling PC Gamer last year that the company is committed to using ethically sourced training data and paying authors.

"We want writers to get paid," said Mason. "We see what we do as a way of giving writers access to the technology that takes the work they've already done—all of that world building, all of that imagining, all of that writing—and then gives them another way to share that with their fans where they get paid more for the work they've already done. We're really excited to work with writers. And personally, I know this is controversial, but I don't think AI is innately evil. I think what we're arguing about is who gets to benefit, and we really want to see the writers, the creators, benefit from it."

Character selection in Friends & Fables. (Image credit: Side Quest Labs)

To generate art for its characters and locations, Friends & Fables relies on Stable Diffusion—a deep-learning model that has been criticized for using a data set of publicly available images without the permission of their creators, which is why Getty Images filed suit against its developer, Stability AI. "We totally sympathize with creators and artists who have gotten their data essentially trained on without permission from all these big AI companies," Liu says. "We don't think that's right either. Like, we think artists should be compensated."

He compares their use of Stable Diffusion to the way a DM running a home campaign might use it. "AI art generators unlock some really cool things there where you can't really hire a sketch artist to come to every campaign and draw things out for you on the fly," he says. "That's just not possible." Friends & Fables isn't just a home campaign, though. While you and one friend can play free for a while, eventually you'll run into the paywall.

Whether or not there's profit involved, generative AI's biggest critics won't touch it, on the basis that it relies on stolen work, devalues real artists, and generates hollow mush. On the other side are people who enjoy refining prompts and finding the specialized use cases where AI can excel as an assistant rather than a replacement. The role of generative AI in RPGs is being explored and developed at dining room tables, tech startups, and big companies like Nvidia (which has been experimenting with conversation-holding NPCs), even as the ethical and legal battle lines are drawn and both sides of the culture war use it as another way to holler at each other, as predictably as anything an AI could come up with.

]]>
https://www.pcgamer.com/software/ai/generative-ai-is-dividing-rpg-fans-can-ai-really-play-dungeons-and-dragons-and-should-it AadySFBLSkjvkCZpETyziM Fri, 25 Oct 2024 01:36:30 +0000
<![CDATA[ Still cringing about your first internship? At least you didn't try to sabotage an AI project from one of China's biggest tech firms ]]> Work experience is often hard won, with internships being just one possible trial by fire. Even if you're not mucking up the office coffee order ("What do you mean it needs to be the colour of 'burnt almonds'!?"), you're still left feeling like you can never know enough. Well, rest assured: whatever early career misstep you made, it can't be as bad as the story of this former intern at ByteDance, owner of the embattled short-form video app TikTok.

You may have already heard a rumour-mill version of this story, something along the lines of: 'intern injects malicious code into AI model, sabotaging 8,000 GPUs and causing ByteDance to lose tens of millions of dollars.' Besides TikTok, ByteDance have also created Doubao, an incredibly popular AI chatbot Bloomberg called "China's answer to ChatGPT."

In a recent social media post, though, ByteDance said that none of their commercial projects were affected by the rumoured sabotage. That said, their statement reveals that there's more to this story (via Ars Technica).

ByteDance have confirmed the intern was fired back in August for "serious disciplinary violations," including "maliciously [interfering] with the model training tasks" for at least one research project. ByteDance also says the situation was serious enough to warrant reporting the intern's behaviour to their university, and to industry contacts besides.


According to ByteDance, the intern in question was part of the commercial technology team, not their AI lab. Some available translations of ByteDance's statement imply that the intern was part of an advertising team rather than a technical one, though commenters under the social post dispute the distinction, claiming that the commercial technology team was part of the AI lab.

Ultimately, ByteDance claims the intern misrepresented some details on their social media profile, and that a number of resulting reports have overstated what happened. Those 8,000 GPUs and alleged millions lost? The company says this was "seriously exaggerated" but doesn't elaborate on the actual figures. Perhaps because of this ambiguity, commenters took umbrage at this too, accusing ByteDance of downplaying the damage done.

To summarise: a ByteDance intern—who has since been fired—did interfere with the company's AI model training. The company claims the damage is not as far-reaching as rumours suggest, though it's unwilling to plainly state how consequential it really was, and it still got the former intern's school involved in disciplinary action. And you thought your work experience was a disaster.

]]>
https://www.pcgamer.com/hardware/intern-sabotages-ai-at-bytedance cuSsVm5c5VHCeGaYKrqC6b Thu, 24 Oct 2024 16:23:51 +0000
<![CDATA[ Radio station uses AI to interview the ghost of a dead Nobel-winner with 3 quirky zoomers who don't exist, seems baffled people don't like it ]]> Ask yourself this: What could be dodgier than a radio station giving its human hosts the boot and replacing them with a cohort of three alarmingly photogenic Gen-Z AIs? If you answered 'having those three zoomer AIs interview another AI, this one imitating a Nobel prize-winning writer who died 12 years ago,' then congratulations, you may have a future ahead of you at Polish station Radio Kraków, which is in hot water for doing just that (via Onet.pl).

On Monday, Radio Kraków announced that it was overhauling its OFF station. Since 2015, the station had broadcast (all following quotes are machine-translated) "a playlist as well as original music programmes and a two-hour morning programme, in which the most time was devoted to cultural and social events in Kraków with the participation of Kraków artists and people associated with the Kraków club scene."

But it's 2024 and, apparently, that doesn't bring in the ears these days. In its place, declared the station, listeners would henceforth hear "the AI-created voices of three hosts—model representatives of Generation Z."

These would be 20-year-old Emilia Nowa, "a journalism student [and] pop culture expert," who is "passionately following the latest trends in the world of cinema, music and fashion"; 22-year-old Jakub Zieliński, who's studying Acoustic Engineering at AGH (a Kraków university); and rounding out the three was 23-year-old Alex, a former psychology student who is "socially engaged, passionately discussing topics related to identity [and] queer culture."


Except none of those people, their degrees, their disturbingly captivating AI-made portraits, or their interests were actually real, of course, because they were all robots.

The reaction from listeners was immediate and acrid. On the Facebook post announcing OFF radio's new direction, former fans left comments like "I wish you exactly the same—exclusively AI as listeners," and "It seems that the easiest person to replace is the manager who came up with it."

OFF radio's former human staff were none too pleased, either. In a separate Facebook post (via Notes From Poland), ex-OFF host Mateusz Demski lambasted Radio Kraków editor-in-chief Marcin Pulit for failing to note—in the AI show's glamorous announcement—that "a dozen or so people had lost their jobs just a moment earlier." Continuing, Demski writes that "This gentleman [Pulit] had the audacity to boast about his many years of journalistic experience. For me, in my opinion, this man has nothing to do with journalism."


For his part, Pulit answered complaints about human job losses by claiming that "no employee of Radio Kraków was fired." Rather, OFF hosts were "external collaborators" whose contracts were terminated, and "not because of AI." Pulit said that a lot of OFF radio's content overlapped with programmes on other Radio Kraków stations, and that "the listenership range of OFF Radio Kraków was close to zero. This was the basis for the decision to make the change."

Demski then implored readers to sign a petition protesting the shift to AI and the threat it poses to journalism, which he had set up "a little in helplessness, a little in anger." It has received 16,000 signatures at the time of writing.

But wait, it gets worse. Like I said, the Gen-Z AI trio's gala debut consisted of an interview with none other than Wisława Szymborska, a world-renowned Polish poet and writer who won the 1996 Nobel prize in literature for, ironically enough, "poetry that with ironic precision allows the historical and biological context to come to light in fragments of human reality."

The problem with that, of course, is that Szymborska has been dead for 12 years—she died of lung cancer in 2012. The subject that Emilia, Jakub, and Alex interviewed was, herself, an AI construct proffering opinions on all manner of things, including "Korean literature and this year's Nobel Laureate Han Kang." You can hear the interview as it aired in this Instagram post.

You can find a (again, machine-translated) transcript of the chat here, including Szymborska enthusing about Pedro Pascal endorsing the work of Polish writer Olga Tokarczuk on Instagram a few weeks ago.

The reaction was pretty much as you'd expect. "After firing employees, you could at least retain a minimum of dignity" wrote one Polish listener on Facebook. "We are witnessing a historical self-immolation," wrote another. "I wish from the bottom of my heart many lost lawsuits, massive financial penalties and a clumsy bankruptcy," judged another.

And, if you couldn't tell, I can't help but share the sentiment. Deploying a gaggle of AI simulacra and hoping they capture the voice of Gen Z is not journalism, and getting them to tackle topics as sensitive and fraught as, for instance, "queer culture" seems like both inviting catastrophe and disrespecting the actual human beings to whom these subjects truly matter. That's to say nothing of thoughtlessly tampering with the memories of the dead through the AI resurrection of figures like Szymborska. Many have asked, but I could not find an answer from the station as to whether it secured the permission of Szymborska's heirs before pulling its stunt.

Will it continue? The executive class has never been one to let a labour-saving technology slip through its fingers, but the response to this one has been disastrous. As far as Radio Kraków's editor-in-chief is concerned, though, "We are testing the possibilities and limitations of this technology at the current stage of its development."

]]>
https://www.pcgamer.com/software/ai/radio-station-uses-ai-to-interview-the-ghost-of-a-dead-nobel-winner-with-3-quirky-zoomers-who-dont-exist-seems-baffled-people-dont-like-it edGGGRJKEkXvY6NCFerNV9 Wed, 23 Oct 2024 11:56:59 +0000
<![CDATA[ A new method to circumvent Windows 11's 'annoying' system requirements just came out ]]> Windows 11 has been adopted a little slowly, leaving many on the still very popular Windows 10 to this day. Part of this is that many think "if it ain't broke, don't fix it" and don't particularly like all the ways Windows 11 tries to sell you on the likes of OneDrive and Xbox Game Pass. Another is that Windows 11 launched with some pretty tough system requirements. Thankfully there are many ways around those, including this newly published tool.

A free tool called Rufus has long been a good workaround for ignoring the system requirements, but a new method has just been made available that might be worth a try.

Flyby11 is the name of a new program you can find on GitHub (as spotted by Neowin) and it simply "removes the annoying restrictions preventing you from installing Windows 11 (24H2) on unsupported hardware". Windows 11 version 24H2 is the most recent version of the software currently available outside of a beta.

Cheekily described as "sneaking through the back door without anyone noticing", Flyby11 uses the Windows Server installation system to skip hardware compatibility checks. You simply have to run the tool and it will get Windows 11 working for you, regardless of what hardware you have in your machine.

However, it's worth pointing out that this is a very new method and the GitHub repository was created on October 19. According to community feedback, no significant issues have popped up with this method, though one user says "No doubt this coder will get a lot of feedback and it will get polished, until then I think I like Rufus."

The Rufus Windows 11 method has been around a good bit longer, and is even recommended by Microsoft (though it only recommends it for compatible rigs).

With there only being a year of support left for Windows 10, it is well worth making the upgrade to Windows 11 now. After that, it can become a bit of a security problem to stay on Windows 10 as the platform won't get security updates from Microsoft itself to handle new viruses, and Windows Defender won't stay up-to-date for very long.

When support ends for Windows 10, you can still keep using your current version of the software, but any bugs you spot will be there forever. And security is a huge concern. So best to upgrade instead. The good news is, if system requirements were holding you back, you now have various ways around that.



]]>
https://www.pcgamer.com/software/windows/a-new-method-to-circumvent-windows-11s-annoying-system-requirements-just-came-out nK6wUnQqBTF9wmhnhpBurA Tue, 22 Oct 2024 16:17:10 +0000
<![CDATA[ Five new Steam games you probably missed (October 21, 2024) ]]>

On an average day about a dozen new games are released on Steam. And while we think that's a good thing, it can be understandably hard to keep up with. Potentially exciting gems are sure to be lost in the deluge of new things to play unless you sort through every single game that is released on Steam. So that’s exactly what we’ve done. If nothing catches your fancy this week, we've gathered the best PC games you can play right now and a running list of the 2024 games that are launching this year.

Killing Time: Resurrected

Steam ‌page‌
Release:‌ October 18
Developer:‌ Nightdive Studios, 3DO

Until recently I hadn't heard of this 1995 Doom clone, probably because it released as a 3DO exclusive before making the jump to PC the following year (cruelly, the year of Quake and Duke Nukem 3D). It doesn't look particularly good at first glance but there are some interesting details: it has live action cutscenes, and sometimes live action characters will appear in the game world itself, which was definitely unique for the time. The 1930s horror setting is certainly rare, and in keeping with the horror theme the game relies more on puzzles than the usual '90s FPS key hunt. Nightdive's treatment looks to make it actually fun to play in 2024: you can crank the framerate up to 144 fps, and there's full 360-degree mouse look.

Sniper Killer

Steam‌ ‌page‌
Release:‌ October 17
Developer:‌ Black Eyed Priest, Henry Hoare

Kinda like the Sniper Elite series, I guess, but instead of virtuously slaying Nazis you're sniping innocent people. Unsurprisingly, it comes from famed horror publisher Torture Star Video. Its Nintendo 64-style first-person shooter art reimagines GoldenEye or Perfect Dark as misanthropic cult games that would have been banned in Australia. An interesting touch: in addition to the creepy snipings, you also get to play as Detective Combardy, who's trying to find the killer sniper and figure out their motive. Looks pretty screwed up overall, and that's no surprise: it's by the same developer responsible for Bloodwash.

Drova

Steam‌ ‌page‌
Release:‌ October 16
Developers:‌ Just2D

I've heard lots of good things about Drova, a top-down pixel art action RPG with a focus on exploration and player choice. It draws from the "mystical allure of Celtic mythology" but follows in the footsteps of Fallout: New Vegas, in the sense that you're no hero or Chosen One, but instead some random tasked with surviving in a gruelling (though also quite beautiful) world. The open world is handcrafted, the story branches according to your actions and decisions, and there are duelling factions you can side with. One thing I've heard repeated about Drova: it doesn't hold your hand, and it's hugely open-ended, so expect that pleasant / infuriating confusion that comes with less heavily scripted RPGs.

Eden Crafters

Steam‌ ‌page‌
Release:‌ October 17
Developer:‌ Osaris Games

Launched into Early Access last week, Eden Crafters is a survival game about transforming uninhabitable planets into habitable ones. That means altering the climate, and one way to do that is with elaborate machines, hence Eden Crafters' big focus on automation. There's also terraforming, crafting, familiar survival loops and, eventually, vehicles. It looks like a collision of a bunch of different things survival enthusiasts love, and unlike most survival games the Steam reviews are pretty positive at this early stage in development. Eden Crafters will likely hit 1.0 in a year with more "planets, biomes and automated systems", among other things.

Yugo: the non-game


Steam‌ ‌page‌
Release:‌ October 16
Developer:‌ In Two Minds Studio

A cool little game about driving through moody landscapes inspired by the Balkans, specifically the roads connecting Kosovo and Montenegro. The "non-game" in the title refers to the lack of objectives: the whole point of Yugo is to take virtual road trips with friends (or strangers, if you like) while chatting or sitting in awkward / comfortable silence. It's called Yugo because not only did Kosovo and Montenegro fall within the former Yugoslavia, but you'll be driving a vehicle based on the Yugo; someone wrote a book about it being the worst car of all time. Adding to the dreamlike atmosphere are the Spomenik-like structures dotted through the minimalist hilly landscapes.

]]>
https://www.pcgamer.com/software/platforms/five-new-steam-games-you-probably-missed-october-21-2024 ysrWHe8vLrPJ42t3n4LVTA Mon, 21 Oct 2024 00:19:39 +0000
<![CDATA[ I fed Google's new notebook summarisation feature my article about the potential dangers of AI scraping and it's as creepy and self-aware as you would think ]]> Yes, I know it's a bit hypocritical to be critical of AI, specifically generative AI, and then use it like some sort of sick party trick. However, I'm a journalist and this is like doing science, kinda. NotebookLM, Google's AI summarisation system, has a new feature that lets you guide its audio summaries with a focus on certain topics and sources, and it's both quite smart and sort of haunting.

Announced today and implemented on Google Labs, the search company's site for AI tools, the latest addition is available for users to test out for themselves. I wanted to give it a piece of information that is somewhat nuanced yet that I know quite well, and it's hard to find a better choice than something you've written yourself.

I handed it a piece I had written earlier today, which is critical of opt-out policies when it comes to AI data scraping, and watched two hosts summarise it with the aim of helping me take notes. Apart from calling opt-out the "Opt O U T" model, it kinda nails it.

The two AI hosts manage to get to my basic opinion in a roundabout way and appear like they're earnestly and level-headedly criticising the thing that made them exist in the first place (data scraping). It then goes on to argue that users should be more proactive about their data use and that all hope isn't lost in the AI data war.

In the second interpretation of the same article, I asked the AI hosts to focus a bit more on Elon Musk and his controversies, just to see how far outside of my article it would go.

Apart from a little ire at Musk's name, it continues to focus on the same basic point, and even makes mistakes in speech patterns, like saying X, then calling it Twitter. It fits "ums" and "ahs" in every now and then, which is surprisingly lifelike.

We noticed many of these same things when testing out the podcast function earlier this month, but the Notebook function is a step above, as you can ask it follow-up questions about the article. I asked it for the basic arguments in my piece and it gave a succinct four-point answer, going over a few rationales for being critical of data scraping, and specifically the problems with opt-out policies.

When I ran it a second time, I caught a few similarities, like the male host calling AI companies sneaky in both versions. The female host also says some version of "The future is shaped by the now" twice on the second attempt.

However, the confidence with which the hosts speak feels worrisome to me. There's a feedback loop here, where, at a moment's notice, you can have a professional-sounding host, telling you "the truth" through a source you've shared. In the case of my article, my argument, whether you agree or not, is relatively straightforward.

It gets some small bits of information wrong, like saying company owners have to opt out when it's actually users, but it's mostly on the money. How does something like this prepare a potential reader for something deeper and more philosophical?

And, as a result, what makes writers keep writing when their work can be summarised by two very friendly voices who can position the information however the reader desires? Fundamentally, our ability to understand the words in front of us requires far greater skill than the ability to read an AI's summary, and language is so multifaceted that we shouldn't trust an AI to get it right.

Like I said at the start, this feels like a spooky party trick, but language is so much bigger than any LLM, however large, can really understand.


Best gaming PC: The top pre-built machines.
Best gaming laptop: Great devices for mobile gaming.

]]>
https://www.pcgamer.com/software/ai/i-fed-googles-new-notebook-summarisation-feature-my-article-about-the-potential-dangers-of-ai-scraping-and-its-as-creepy-and-self-aware-as-you-would-think NSPWpR65AoUhGgYkV6niZW Fri, 18 Oct 2024 16:47:39 +0000
<![CDATA[ The AI opt-out models Meta, Musk's X, and the UK gov are proposing are simply not a good enough way for us to protect ourselves from data scraping ]]> Nobody likes the idea of what major social media companies do with their users' information, but as trends like "Goodbye Meta AI" suggest, people are even more worried about what AI scrapers, specifically, are doing with their data. Proposed changes by the UK government and Elon Musk's X, like Meta before them, could end up being pretty tedious to opt out of, if not downright obfuscated.

Starting with X: as spotted by TechCrunch, a recent change to the social media site's privacy policy says it may share your data with third parties. If you don't opt out of this data sharing, your data can be used to train AI models, "whether generative or otherwise". You can opt out by going into 'Settings', then 'Data Sharing and Personalization', and turning off data sharing.

This is turned on by default, and you aren't warned about it when you create an account. However, making an account does entitle the social media site to harvest the data you generate on it, so it doesn't appear to be under any obligation to warn you. Not only does X have its own AI model and chatbot, Grok, but your data could be, and likely already has been, used to train AI models from other companies.

In a very similar story, as reported by the Financial Times, the UK government is currently consulting on a proposal that would allow companies to train AI models on data scraped from websites unless its users choose to opt out.

This is frankly not a good enough way to keep consumers fully informed about how and why their data is used. Burying the ability to opt out of AI scraping in some fairly obscure part of an app's settings won't tell the majority of users how their data is being used, or even that they can opt out in the first place.

An opt-in model would work much better here, letting users actively choose to allow their data to be scraped. However, it's hard to believe enough users would do so to satiate the data appetites of current AI models and their owners.

If site owners are unhappy with an opt-in model, that implies they're aware many users would not choose to have their data scraped, and that touches on part of the problem with opt-out policies. They feel like an appeasement to those in the know, but not a good enough tool to alert the average consumer to their own data rights.

AI scraping, specifically, is a very new thing for many users and sites, and many of the current models were built with the sort of Wild West, frontier approach that marked the very start of generative AI. AI companies have been bypassing copyright and acting in ethically ambiguous ways to get data for some time, so consumers need to be more proactive about their privacy than ever.



]]>
https://www.pcgamer.com/software/ai/the-ai-opt-out-models-meta-musks-x-and-the-uk-gov-are-proposing-are-simply-not-a-good-enough-way-for-us-to-protect-ourselves-from-data-scraping QHzmreuDn655JFBhPNmZY4 Fri, 18 Oct 2024 13:33:15 +0000
<![CDATA[ There may be a use for the Copilot key after all, but not quite yet—Microsoft is toying with the idea of allowing us to change what it opens ]]> The age of the AI PC is here, and with it lots of unnecessary fluff. Fluff, that is, such as the outrageous ditching of a perfectly respectable right-Ctrl (Menu) key in favour of a shiny new Copilot key. Apart from signalling to others nearby that you have a snazzy new computer, by default this key does nothing other than open the Copilot AI assistant. Well, that was the case.

It looks like we might soon get some actual use out of the Copilot key on our keyboards, because Microsoft is currently toying with the idea of allowing us to remap it to open other applications. It originally seemed like this was rolling out in testing with Windows 11 Preview Build 22631.4387, but Microsoft has clarified that "this feature will roll out to Insiders in Release Preview on Windows 11, version 23H2 at a later date and is not rolling out yet with this update".

It's not quite as straightforward as that, however, because the original (now struck-through) text stated that you can make the key open a different app, but only one "in a signed MSIX package", which "ensures that the app meets security and privacy standards to keep you safe".

MSIX apps use a new packaging standard that's supposed to be more secure than the previous EXE and MSI ones, but such apps are currently few and far between. Still, a few is better than zero, right? Especially when the alternative is to have an entire physical key dedicated to opening an AI assistant.

The creation and addition of the Copilot key caused something of a stir when it was unveiled back in January, in large part because it was the first time in almost 30 years that a key had been added to the standard Windows keyboard. (Of course, we should really say "replaced" rather than "added".)

Early in the year it was also deemed a requirement for a PC to be considered an AI PC. You know, apart from all the powerful NPU stuff that actually matters. Since then, there's been less talk of the key requirement, but the damn things are still there on all these new AI PCs, so we'd better get used to them.

If they're here to stay, then I suppose making them remappable might be enough of a spoonful of sugar to make the poiso- *cough* medicine go down. Not necessarily in the most delightful way, but it's better than nothing. Fingers crossed this does actually get pushed through testing soon and more apps become compatible. 

]]>
https://www.pcgamer.com/software/ai/there-may-be-a-use-for-the-copilot-key-after-all-but-not-quite-yet-microsoft-is-toying-with-the-idea-of-allowing-us-to-change-what-it-opens VSjY6g6TWEsytu7fpT9kGW Wed, 16 Oct 2024 16:40:43 +0000
<![CDATA[ Garry from Garry's Mod finally gets the ultra-rare achievement for playing with Garry ]]> Garry's Mod is a physics sandbox created by Garry Newman and Facepunch Studios way back in 2004, and in the two decades since release it has spawned countless thousands of mods, memes and gamemodes. Back in 2019, PC Gamer spoke to Garry Newman about the first 15 years, and asked about the most-requested feature from players:

"The biggest thing, by about a million miles, is the 'Played with Garry' achievement," says Newman. "It’s one of the hardest achievements to get on Steam—for obvious reasons."

The achievement is called "Yes, I am the real Garry!" and requires players to "play on the same server as Garry" himself. If you look at Gmod's achievements, 2.8% of players have managed to bag this, which still seems relatively high, but bear in mind that many frustrated players have resorted to spoofing encounters or other nefarious means, because it's hard to play with Garry.

And in theory, it should be impossible for Garry to play with Garry. Yet Garry Newman of Garry's Mod has now earned "Yes, I am the real Garry!" and Steam popped the achievement.

"Finally got this," says Newman, posting a screenshot of the achievement.

The top reply from Yomi is, inevitably, "can you help me get it Garry?" Jaiydanimate is almost plaintive: "u should just give it to everyone at this point bro I been tryin my darndest since 2009."

"That genuinely made me mad," says Tlacitel on the gmod subreddit. "Wait there is more than one Garry?" asks Volt-Off, leading to the inevitable reply from Kgamer404: "Always has been."

There's plenty of speculation about how exactly this was awarded, and I have absolutely no idea. I dropped Newman a line to ask if he was any the wiser, and will update with any response.

Newman and Facepunch continue to work on Rust, which is in incredibly rude health, and s&box, the game-slash-platform eventually intended as the successor to Garry's Mod. The idea with the latter is not just to improve on everything, but to give creators a distribution method and a way to monetise their creations that avoids traditional industry fees and inconveniences. s&box is currently available to developers in a preview version.

]]>
https://www.pcgamer.com/software/platforms/garry-from-garrys-mod-finally-gets-the-ultra-rare-achievement-for-playing-with-garry hjYyYcLicy5AWiWsdQitMk Tue, 15 Oct 2024 17:35:55 +0000
<![CDATA[ Will AI grow so powerful it can threaten us? Chief Meta scientist says: 'Pardon my French, but that’s complete B.S.' Which is the attitude I would expect from any company with so much skin in the AI game ]]> AI is a worrisome subject: from the potential of LLMs misinforming people en masse through chatbots, or generative AI posing a risk to the careers of many great artists, all the way to Skynet taking over and conquering humanity. Alright, one of those is far less likely to happen than the others, but it's still a debate many are having, and a chief AI scientist at Meta reckons we have nothing to worry about.

In a new report from The Wall Street Journal, Meta chief AI scientist Yann LeCun was asked whether humans should be afraid of the future of AI, to which he said: "You’re going to have to pardon my French, but that’s complete B.S."

LeCun has an impressive resume in AI, having won a Turing Award in 2018 for his work in deep learning. He has since been proclaimed one of the "godfathers of AI", as the original report points out. That is to say, he has tonnes of experience in the field.

He told the Journal that AI is dumber than a cat, which echoes a recent paper from Apple scientists on the limitations of LLMs (large language models). That paper suggests LLMs can't reason as humans do, and points to a "critical flaw in LLMs' ability to genuinely understand mathematical concepts and discern relevant information for problem-solving."

This is not LeCun's first time publicly pushing back against fear and anger around AI. In May this year, he had a spat with Elon Musk, in which LeCun not only doubted Musk's promises around xAI (Musk's own AI venture) but also Musk's politics. When challenged on his credentials by Musk, LeCun pointed to 80 technical papers he had published over the previous two years, to which Musk replied: "That’s nothing, you’re going soft. Try harder!"

Understandably, Musk's arrogance earned LeCun plenty of support, as shown by the likes on his original posts.

Meta has been exploring its own AI, and users have been expressing their distrust through the "Goodbye Meta AI" chain mail. Meta is currently involved in many types of AI use, so comparing AI's intelligence to a cat's potentially misses some of the arguments people are making against it.

It's important to note that users' worries about AI don't just stem from fears of Skynet and AI surpassing human intelligence. Much of the fear is cultural, political, legal, and artistic. It doesn't necessarily matter how good or bad AI is in a technical sense if it replaces human art and creativity, and it doesn't need to be smarter than a housecat to do that.

Hey, even a housecat can knock your computer off your desk.

]]>
https://www.pcgamer.com/software/ai/will-ai-grow-so-powerful-it-can-threaten-us-chief-meta-scientist-says-pardon-my-french-but-thats-complete-b-s-which-is-the-attitude-i-would-expect-from-any-company-with-so-much-skin-in-the-ai-game fVjP9LBJ3canEWJ5qC8kbD Tue, 15 Oct 2024 15:20:34 +0000