
The New Apocalypse: AI



On 4/3/2023 at 7:00 AM, bsjkki said:

 

Just because AI isn't sentient and doesn't act purposefully does not mean, of course, that AI is not capable of doing great harm.  And AI's built-in "Who, me?" defense (or, better said, the possibility/probability of others invoking such a defense on AI's behalf, as it were) makes such harm worse rather than better.  As someone who has been the victim of defamation myself, I don't see this as any laughing matter.  Let's take a closer look at the incident involving Professor Turley:

Here is a link to reporting on the incident from The Washington Post, under the bylines of Pranshu Verma and Will Oremus (so, presumably, this article is trustworthy and wasn't written ... "composed" or "generated" might be better words ... by AI).

https://www.washingtonpost.com/technology/2023/04/05/chatgpt-lies/

The tag line of the article says, "The AI chatbot can misrepresent key facts with great flourish, even citing a fake Washington Post article as evidence."  For the moment, let's bracket the fact that AI lacks consciousness and, thus, cannot tell knowing falsehoods.  Contrary to the tag line, it doesn't simply "misrepresent key facts."  It pulls them out of thin air, carves them out of whole cloth.  As the link states, if a human were to do what AI does, we could say that the human lies.   

Quote

One night last week, the law professor Jonathan Turley got a troubling email. As part of a research study, a fellow lawyer in California had asked the AI chatbot ChatGPT to generate a list of legal scholars who had sexually harassed someone. Turley’s name was on the list.

Perhaps the problem is with the word "generate."  Perhaps the AI, when asked to "generate" something, doesn't know the difference between pulling something from information that exists already and creating the information.

Quote

The chatbot, created by OpenAI, said Turley had made sexually suggestive comments and attempted to touch a student while on a class trip to Alaska, citing a March 2018 article in The Washington Post as the source of the information. The problem: No such article existed. There had never been a class trip to Alaska. And Turley said he’d never been accused of harassing a student.

And who made the accusation?  The AI made the accusation.  The person who is alleged to have made the accusation, and the article in which the alleged incident is reported, do not exist.

Quote

A regular commentator in the media, Turley had sometimes asked for corrections in news stories. But this time, there was no journalist or editor to call — and no way to correct the record.

Indeed, how does one "correct the record" when "the record" is carved out of whole cloth?

Quote

“It was quite chilling,” he said in an interview with The Post. “An allegation of this kind is incredibly harmful.”

Yes, quite chilling and incredibly harmful.  Apt descriptors, those.

Quote

Turley’s experience is a case study in the pitfalls of the latest wave of language bots, which have captured mainstream attention with their ability to write computer code, craft poems and hold eerily humanlike conversations. But this creativity can also be an engine for erroneous claims; the models can misrepresent key facts with great flourish, even fabricating primary sources to back up their claims.

As largely unregulated artificial intelligence software such as ChatGPT, Microsoft’s Bing and Google’s Bard begins to be incorporated across the web, its propensity to generate potentially damaging falsehoods raises concerns about the spread of misinformation — and novel questions about who’s responsible when chatbots mislead.

Indeed.  If we think misinformation is a problem now, something tells me that we ain't seen nothin' yet.

Quote

“Because these systems respond so confidently, it’s very seductive to assume they can do everything, and it’s very difficult to tell the difference between facts and falsehoods,” said Kate Crawford, a professor at the University of Southern California at Annenberg and senior principal researcher at Microsoft Research.

And an Achilles heel of too many humans is that they tend to conflate the confidence with which a response is delivered with the accuracy of that response.  Are we already seeing "appeals to authority" in which AI is the "authority"?  I don't know, but it wouldn't surprise me.  (Perhaps I'm simply too cynical. ;))

Quote

In a statement, OpenAI spokesperson Niko Felix said, “When users sign up for ChatGPT, we strive to be as transparent as possible that it may not always generate accurate answers. Improving factual accuracy is a significant focus for us, and we are making progress.”

Whew!  I'm sure Professor Turley takes great comfort in Mr. Felix's assertion that OpenAI's powers-that-be are "making progress" in "improving factual accuracy."  One hopes that a good deal of such progress can occur before Professor Turley's entire reputation has been reduced to rubble.

Quote

 

Today’s AI chatbots work by drawing on vast pools of online content, often scraped from sources such as Wikipedia and Reddit, to stitch together plausible-sounding responses to almost any question. They’re trained to identify patterns of words and ideas to stay on topic as they generate sentences, paragraphs and even whole essays that may resemble material published online.

 

"Garbage In, Garbage Out" is not a new concept in computing.  One of the good things about Wikipedia and Reddit (and so forth, potentially ad infinitum) is that anyone may contribute.  One of the bad things about Wikipedia, Reddit, and so forth, is that any fool may contribute.  It would seem that, like many, many other things, OpenAI/ChatGPT is as good as the information it gets.

The writers go on to note that

Quote

just because they’re good at predicting which words are likely to appear together doesn’t mean the resulting sentences are always true; the Princeton University computer science professor Arvind Narayanan has called ChatGPT a "bull$h!t generator." 

Also, the authors note, "sounding authoritative," on the one hand, and actually being reliable, on the other, are two completely different things.
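(For readers who want a concrete feel for what "predicting which words are likely to appear together" means, here is a toy sketch in Python. It is not how ChatGPT actually works internally; the tiny hand-built corpus, the function names, and the sample sentences are all invented purely for illustration. The point is only that a system that picks the next word by probability can produce fluent, confident-sounding text with no notion of whether the result is true.)

```python
import random
from collections import defaultdict

# Toy illustration only: a tiny bigram "language model" built from a few
# made-up sentences. Real chatbots use neural networks trained on billions
# of documents, but the core move is similar: choose the next token based
# on how often tokens have followed one another in the training text.
corpus = [
    "the professor wrote an article about the law",
    "the professor was accused in an article",
    "the article cited a washington post story",
]

# Count which word follows which.
follows = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        follows[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Generate a plausible-sounding word sequence by sampling likely next words."""
    word, output = start, [start]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:
            break
        word = random.choice(candidates)  # frequency-weighted, since repeats stay in the list
        output.append(word)
    return " ".join(output)

# Output reads as fluent-ish English, e.g. "the professor was accused in an
# article cited a washington post story" -- yet nothing in the loop ever
# checks whether the resulting claim is true.
print(generate("the"))
```

Nothing in that loop compares the output against reality, which is exactly the gap the article is describing: fluency and reliability come from two different places.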

The article goes on to note that a regional mayor in Australia is threatening to sue OpenAI for defamation after the program made an accusation that he served time in prison for bribery.  It will be interesting to see how that case plays out.

In another case, a journalist used ChatGPT to research sources, and the program returned an actual source along with purported citations to that source's work ... yet all of those citations were fake.  That source apparently has coined the neologism "hallucitations" to describe them.

One thing that scares me is that ChatGPT didn't come up with the idea of fabricating sources on its own.  Apparently, it sees such behavior often enough as it scours the Internet that, in the amoral calculus of its algorithms, such a thing is considered acceptable.

The whole article is worth a read, and is worthy of more than mere passing consideration.  It should, I think, give any reasonable reader a good deal of pause.

 

Edited by Kenngo1969
18 minutes ago, Kenngo1969 said:

The whole article is worth a read, and is worthy of more than mere passing consideration.  It should, I think, give any reasonable reader a good deal of pause.

Yes.  The elusive "reasonable reader".  It seems like the ratio of reasonable readers to readers overall has always been on the abysmal side of embarrassing.  IMO, lies, bias, and falseness masquerading as truth have been with us for quite a long time.  So have the people who actively try to pull truth from sources that are obviously not sources of truth (like the snarky teenagers at 4chan, and astrology, and dream interpretation, and the billion versions of Nigerian prince email scammers, and satire sites, and whatnot).

You can take a heck of a lot of the ominous overtones out of the discussion, if you just have enough common sense to not fall for everything you see.  It's really not rocket science - just pick one of these:

 

[Attached images: the IFLA "How To Spot Fake News" infographic, a CRAAP-test chart, and a "Fact Checking for Fake News" graphic]

 

2 hours ago, Calm said:

What do you mean by defense?

AI, of course, has no capacity for malice, conscious ignorance, and so on; such things are beyond it.  But that doesn't stop others from invoking them in AI's "defense," as it were.

5 hours ago, Kenngo1969 said:

The tag line of the article says, "The AI chatbot can misrepresent key facts with great flourish, even citing a fake Washington Post article as evidence."  For the moment, let's bracket the fact that AI lacks consciousness and, thus, cannot tell knowing falsehoods.  Contrary to the tag line, it doesn't simply "misrepresent key facts."  It pulls them out of thin air, carves them out of whole cloth.  As the link states, if a human were to do what AI does, we could say that the human lies.   

As a society we don’t shun human liars. Why would a few more liars that lie because they are mimicking human liars make things worse? I mean, we already have the bot farms run by Russia showering us with lies so purely digital liars aren’t new either.

5 hours ago, LoudmouthMormon said:

Yes.  The elusive "reasonable reader".  It seems like the ratio of reasonable readers to readers overall has always been on the abysmal side of embarrassing.  IMO, lies, bias, and falseness masquerading as truth have been with us for quite a long time.  So have the people who actively try to pull truth from sources that are obviously not sources of truth (like the snarky teenagers at 4chan, and astrology, and dream interpretation, and the billion versions of Nigerian prince email scammers, and satire sites, and whatnot).

You can take a heck of a lot of the ominous overtones out of the discussion, if you just have enough common sense to not fall for everything you see.  It's really not rocket science - just pick one of these:

 

[Attached images: the IFLA "How To Spot Fake News" infographic, a CRAAP-test chart, and a "Fact Checking for Fake News" graphic]

 

I don't disagree with anything you say here.  Perhaps most worrisome to me, however, is that ChatGPT/OpenAI itself  has no ability to engage in such reasoning.

Edited by Kenngo1969
9 minutes ago, The Nehor said:

As a society we don’t shun human liars. Why would a few more liars that lie because they are mimicking human liars make things worse? I mean, we already have the bot farms run by Russia showering us with lies so purely digital liars aren’t new either.

I suppose you are right to imply that the source of a lie (that is, whether a lie originates with a human or a machine) is irrelevant, or that a lie whose source is a machine is no worse than a lie that originates with a human.  However, at least there is the possibility of engaging in moral reasoning with a human and persuading that human of the error of his or her ways, while no such possibility exists with a machine.

26 minutes ago, Kenngo1969 said:

I suppose you are right to imply that the source of a lie (that is, whether a lie originates with a human or a machine) is irrelevant, or that a lie whose source is a machine is no worse than a lie that originates with a human.  However, at least there is the possibility of engaging in moral reasoning with a human and persuading that human of the error of his or her ways, while no such possibility exists with a machine.

On the other hand someone can utterly destroy the bots without any moral difficulties. Do that to human liars and they start screaming a lot about how unfair it is that I am feeding them to crocodiles. So much whining…….

16 hours ago, Kenngo1969 said:

As the link states, if a human were to do what AI does, we could say that the human lies.   

But this is a misconception.

Lying does not mean telling a falsehood. Lying means saying something that you do not believe to be true.

The example I like to use is of Johnny, who is finishing his homework while eating breakfast. The bus shows up, and he runs out the door and leaves his homework on the table. His dog, smelling the bacon grease that dripped on the homework, pulls it off the table and eats it. Meanwhile, Johnny, not wanting to admit that he left his homework on the table, tells his teacher that his dog ate it. Johnny has both made a factually accurate statement and lied at the same time. In fact, if Johnny had said instead, "My homework is on the table at home," he would be making a factually inaccurate statement, but he wouldn't be lying. Telling a lie doesn't depend on whether or not the statement is factually accurate; it depends only on whether you believe the statement is factually accurate. Something being a lie is dependent on intent.

Our so-called AI doesn't believe things to be true or false. It doesn't have intention. Even if a human were to do what the AI does, we wouldn't call it a lie.

16 hours ago, Kenngo1969 said:

Indeed.  If we think misinformation is a problem now, something tells me that we ain't seen nothin' yet.

I think that this entirely misunderstands the nature of the problem. AI works through algorithms. Over the last five years, the capacity of our AI algorithms to produce useful information has grown incredibly fast. When we see AI fabricating evidence for arguments, it is because the algorithms have a lot of room for improvement - well, that and the fact that we also regularly see people fabricating evidence for arguments. In another couple of years, these kinds of errors will be vanishing - just as the errors that AI was making three or four years ago have been reduced or corrected.

Misinformation is nothing new. We get it all the time. One of the things that most of us learn (of necessity) is to employ critical thinking to determine what is useful information and what isn't. The challenge with AI isn't so much that it can produce misinformation. Before there was AI, we had lots of misinformation. We had big data analysis helping those who wanted to use misinformation to create targeted campaigns of misinformation. We have had massive data collections about people that provide relatively intimate details about an individual's preferences and beliefs - all of which can be used in the creation of misinformation. The reason AI makes misinformation a larger problem is that it is faster and much more efficient. We have seen in the past 20 years a shift where misinformation has become a more targeted enterprise. Social media helped this targeting by grouping users into profiles - allowing more narrowly tailored misinformation to be aimed at specific small groups. AI is becoming efficient enough that it can look at profile data and create misinformation targeted at a single person. AI is just another tool used by those who wish to spread misinformation - but it isn't the cause of it.

17 hours ago, Kenngo1969 said:

"Garbage In, Garbage Out" is not a new concept in computing.  One of the good things about Wikipedia and Reddit (and so forth, potentially ad infinitum) is that anyone may contribute.  One of the bad things about Wikipedia, Reddit, and so forth, is that any fool may contribute.  It would seem that, like many, many other things, OpenAI/ChatGPT is as good as the information it gets.

The fools are never the real problem. They are annoying - but in general, it's not usually very hard to determine which voices are theirs. We are reasonably good at this. The real problem with Reddit and Wikipedia is the people who are deliberately trying to manipulate content with a specific agenda. With OpenAI/ChatGPT, the key isn't the information that it gets - it can be given virtually everything there is and it will still make these kinds of mistakes. It is about asking it the right questions to get something worthwhile back out. What separates the casual user of an AI system from the professional user is that the professional user knows how to make the system respond in the way that they want, every time. Part of the development of an AI system is that it not only needs to predict the appropriate responses to the prompts it is given, it also needs to predict the meaning of the prompts it is given. This second part is being given a lot more attention at the moment, now that we have made some pretty big steps on the first part.

17 hours ago, Kenngo1969 said:

One hopes that a good deal of such progress can occur before Professor Turley's entire reputation has been reduced to rubble.

There is a certain oddity to this, of course. The individual who had the AI chatbot spit out the reference to Turley has since corrected his account. The story was about information kicked out by an earlier version of ChatGPT. The current version points out that these AIs can still make up data - but it also provides the prompt that was used, and this tells me all sorts of things. There are a lot of bad assumptions that people make when they simply connect to the free version of ChatGPT and expect it to do something significant (or even repeatable). There is even a bit of dishonesty that occurs (at least in my opinion) in giving a poorly constructed prompt and then pretending that the output should be just like the output you would get from a perfect prompt. Anyone who has spent any amount of serious effort working with ChatGPT will understand exactly what I am referring to here. Those who seriously use ChatGPT wouldn't end where the story ends. You look at what comes out; if you can't verify it, you don't stop there - you go back to ChatGPT and you rebuild the prompt so that it spits out what you were looking for.

The other misconception is that the specifics are repeatable. If ChatGPT makes things up, it does so uniquely each time it is prompted. That is, if ChatGPT 3.5 did fabricate a reference to a non-existent article about Turley, and if you went and fed it the exact same prompt a second time, it would not create the same output - it would likely continue to use some of the accurate material, but when it fabricates, it would create all new fictional evidence. Turley's reputation isn't going to be impacted by ChatGPT - unless, of course, Turley himself makes a large enough issue about it that it spreads all over the internet, creating a large enough body of material devoted to the question that ChatGPT starts to use the material Turley himself spread as part of the argument. The people behind this story were simply not proficient users of AI. When we deal with the "garbage in, garbage out" idea, it applies much more to the prompt than it does to the data being looked at.
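To make the last two paragraphs concrete, here is a minimal sketch of that verify-and-rebuild loop. Everything in it beyond the general idea is an assumption on my part: it presumes the 2023-era openai Python package (pre-1.0, where openai.ChatCompletion.create was the chat call), the gpt-3.5-turbo model name, a placeholder API key, and a hypothetical citation_exists() check that a real user would have to implement against a catalog or search index. Setting temperature to 0 also speaks to the repeatability point just made: by default the model samples, which is why the same prompt can fabricate differently on every run.

```python
# A minimal sketch of the "verify, then rebuild the prompt" workflow described
# above. Assumptions (not from the article): the pre-1.0 openai package, the
# "gpt-3.5-turbo" model, and a stand-in citation_exists() verifier.
import openai

openai.api_key = "YOUR_KEY_HERE"  # placeholder, not a real key

def ask(prompt: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Cite only sources you are certain exist. "
                        "If you are not certain a source exists, say so."},
            {"role": "user", "content": prompt},
        ],
        temperature=0,  # 0 = most repeatable; with the default, identical
                        # prompts can yield different output on every run
    )
    return response["choices"][0]["message"]["content"]

def citation_exists(citation: str) -> bool:
    """Hypothetical stand-in: in real use, check each citation against a
    library catalog or search index. Here it always flags the line as
    unverified, which forces the rebuild step below."""
    return False

prompt = "List three published law-review articles on defamation, with full citations."
answer = ask(prompt)

# The professional habit described above: never stop at the first answer.
for line in answer.splitlines():
    if line.strip() and not citation_exists(line):
        # Can't verify it? Don't publish it -- rebuild the prompt and try again.
        answer = ask(prompt + "\nOnly include citations you can verify; omit any you cannot.")
        break
```

The design point is simply that verification lives outside the model: the loop treats every citation as unconfirmed until something other than the chatbot vouches for it.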

The more cynical part of me believes that these kinds of accusations against AI are themselves a part of a deliberate misinformation campaign trying to sway popular sentiment against AI technologies. And I don't think it is because they don't like AI, it's that they don't like AI being publicly accessible.

17 hours ago, Kenngo1969 said:

Also, the authors note, "sounding authoritative," on the one hand, and actually being reliable, on the other, are two completely different things.

This is not an AI issue.

17 hours ago, Kenngo1969 said:

The article goes on to note that a regional mayor in Australia is threatening to sue OpenAI for defamation after the program made an accusation that he served time in prison for bribery.

And that person will lose. Much like the earlier story, the mayor learns of this through word of mouth. The analogy is that two guys are drinking in a bar, and one tells the other that he thought he read that the mayor had been involved in criminal activity. The other guy reports this to the mayor, who then sues for defamation. The closest argument that the mayor can make is that the AI 'published' the material. But the problem is that the output was likely unique to the individual who ran the prompt through the AI. A publication to one person isn't going to draw much legal response, even if it draws a lot of attention.

At any rate, all of this fearmongering doesn't dismiss the fact that these AI systems are being used in lots of ways that are very helpful. It is a tool. The way that we employ them reflects more on us than it does on the tool itself.

