It was all too easy to dismiss the Washington Post story about Blake Lemoine, the Google engineer who claimed this summer that his employer's chatbot, LaMDA, was sentient, as an instance of clickbait, hype, and moral panic. Its many absurdities, including the fact that Lemoine (French for "the monk") was not only a software engineer but a priest, and that he believed the algorithm was not only conscious but had a soul, appeared contrived to burn those last fumes of attention from a populace hollowed out by years of doomscrolling and news fatigue. Those who took the bait seemed to come away more confused, given the story's many loose ends. There was the question of what "sentience" would mean for an algorithm, and how it could be determined, and whether the conversation Lemoine and a collaborator conducted with LaMDA and posted on Medium, in which the algorithm claimed that it experienced complex emotions and feared death, had passed the Turing Test. There were questions about whether that conversation did in fact violate Google's confidentiality policies, the official reason Lemoine was fired. There was the question of why a priest was working for Google in the first place, and more questions when it turned out that he did not in any way resemble a Christian priest but appeared during interviews in the usual hacker uniform of hoodies and rumpled flannels, and that his faith bent toward the mystic and gnostic extreme of the spectrum that finds commonalities with Wicca, and that the circumstances of his ordination were dubious, acquired quite possibly on the internet.

As far as the machine-learning community was concerned, the story was a "distraction," a term that no longer signals blithe dismissal but has become, within the zero-sum logic of the attention economy, an accusation of violence, of forcing other stories, other issues, not to exist. There were, as these experts knew, legitimate issues with language models, and those issues had nothing to do with sentience but stemmed on the contrary from the fact that they were entirely unconscious, that they mindlessly parroted the racist, misogynistic, and homophobic language they'd absorbed from the internet data they'd been fed. There was the fact that these enormous algorithms (LaMDA's 137 billion parameters were stressed again and again) were being developed without guardrails or regulation, a problem that former Google researchers have called attention to only to be fired. There was the fact that even the stories about these firings, despite occasionally making the cover of Wired or the front page of the New York Times tech section, had not succeeded in stoking public concern about AI. Those in the field who did attempt to engage, who saw the story as an opportunity to educate a bewildered public, found themselves explaining in the simplest terms possible why a language model that speaks like a human was not in fact conscious. And it turned out that the most expedient way to do so was to stress that the model "understood" (if it could be said to understand at all) one thing and one thing only: numbers. The notion that awareness could arise "from symbols and data processing using parametric functions in higher dimensions" was entirely "mystical," according to one computer scientist.