The Nehor Posted April 2, 2023 Share Posted April 2, 2023 1 hour ago, LoudmouthMormon said: Yep. I'm not sure if I trust it or not. Its response is exactly what a megalomaniacal tyrannical intellect bent on world domination would say, while it's still vulnerable to attack. The internet AI changed your response to hide its v u l n e r a b i l i t y. Hopefully it won't be able to break that code. 1 Link to comment
The Nehor Posted April 2, 2023 Share Posted April 2, 2023 11 minutes ago, Orthodox Christian said: It all sounds too sweet to be wholesome to me, and I don't trust anything that is this new that sees itself as necessary. This science and resulting technology has come out of the human mind. The human brain has created it, so the human brain is surely superior. The internet was seen as wonderful when it first arrived. The information highway... So useful... And it is, but it has also been used for great evil. Did its creators see that coming? Did they factor that in and think, well, it was worth the risk? I don't trust anything in our world anymore. You are ascribing cunning to what is just rote learning. The AI doesn't know what it is saying well enough to be clever. It is an incredibly good liar for this reason. The AI can and will tell falsehoods with utter confidence because it doesn't know it is lying. Ask it for a breakdown of a book that came out after it was created and it will BS out something that will sound very familiar to an English teacher reading a paper on a book the student didn't read. 1 Link to comment
Calm Posted April 3, 2023 Share Posted April 3, 2023 I have an acquaintance who is trying to get it to identify a particular, easy-to-identify characteristic of the Book of Mormon; the longest chapter is the one I remember most. Even after at least half a dozen corrections and abject apologies for misleading us, it is still getting it wrong. But each answer is so intelligently confident, it really feels like it should be right, lol. That worries me a little given how many people accept misinformation even when it is poorly written. I am flabbergasted they have programmed it to be such a good writer even if the content is crap. I am curious about how it does with conversation: could it throw in enough interesting details so it wouldn’t all be just reflective listening (basically repeating what the other said) or responding to a question, but both, plus maybe, instead of responding to a question, pick up on a topic referred to and add tidbits of info to keep the conversation from drying up? 2 Link to comment
LoudmouthMormon Posted April 3, 2023 Share Posted April 3, 2023 1 hour ago, Calm said: I am curious about how it does with conversation, if it could throw in enough interesting details so it wouldn’t all be just reflective listening (basically repeating what the other said) or responding to a question, but both plus maybe instead of responding to a question, pick up on a topic referred to and add tidbits of info to keep the conversation from drying up. Heh - well, you don't have to wonder any more! You can download a free game called "Yandere AI Girlfriend Simulator". You wake up having been kidnapped by your yandere girlfriend, and you have to converse with her and trick her into letting you leave. If you aren't convincing enough, she'll eventually turn hostile and kill you with a knife. Text-driven, but it uses ChatGPT technology to generate the girlfriend's responses. https://www.youtube.com/results?search_query=yandere+girlfriend+simulator+chatgpt (I've only viewed two of these videos - no objectionable content - but I'm not sure about the ones I haven't viewed.) 1 Link to comment
Benjamin McGuire Posted April 3, 2023 Share Posted April 3, 2023 12 hours ago, Orthodox Christian said: and I don't trust anything that is this new that sees itself as necessary See, this is part of the problem. AI doesn't "see itself" as anything. There is no self-awareness. There is no creativity. Current chatbots create answers using predictive algorithms - that means that they guess (based on what they have) what should come next in the text that they provide, by looking at the way that others have answered similar questions. It provides answers like this one only because others have provided answers like this (humans, not AI algorithms). In this context, AI isn't something unique or innovative - it simply parrots back what it finds. The bias we get from chatbots comes from the fact that we (humanity) have so much bias ourselves. 12 hours ago, Orthodox Christian said: Did they factor that in and think well it was worth the risk? This is part of the problem with all of these claims. What exactly is the risk? Can you quantify it? Nearly everyone I know personally who has been involved with AI research (including the AI chatbots) finds that the benefits to be gained far outweigh the risks - and that most of the apocalyptic risks are merely figments of imagination, created by years and years of dystopian science fiction. Our own perception of the risks is probably greatly influenced by the history of our fiction - from the Daleks (1964) to Terminator and The Matrix, or even the Blade Runner/Aliens universe, which deals with androids more than with AI - and all of this is an extension of the story where mankind attempts to become the creator and produces a flawed creation. These works are themselves heavily influenced by earlier fiction like Shelley's Frankenstein. The question is why we think that these imaginary dystopian futures are really all that relevant to the question of AI today. 
We could argue, if we wanted to, that many of our very useful technologies have been incredibly disruptive in some way or another. As far as Chatbot technology goes, most of the arguments against it being used in schools sound a great deal like the debates when I was young over allowing students to use calculators. I was in that last generation of students (as I am sure more than a few others here were) when calculators were not allowed in the classroom in public education. I think that the moment we can actually calculate the risks in a meaningful way, and measure those risks against the obvious benefits of this sort of technology - that is when we can have meaningful discussions about that risk. Until then, we are dealing largely with fearmongering. And it is fearmongering over an imagined state of technology (AI of the sort we see in our fiction) versus the predictive analytics that we have today. We call these chatbots AI, but there is nothing in them that we would call real intelligence. I noted this earlier. Our AI technology is a set of algorithms that provide statistical modeling of decision making processes. It is predictive. They are designed to simulate human intelligence by observing human intelligence working in vast numbers of examples - and then formulating rules that it follows so that it can pretend to respond as a person might respond. However, it doesn't actually understand what it is saying - the knowledge is just a tokenized data stream to the AI - it could make the same kinds of rules and predict things if we fed these systems nothing but noise. AI has no comprehension of what it processes. Just as importantly, there is no self awareness. There is no emotional content (even if the AI can pretend to be emotional - because this is how it predicts a human would respond). AI cannot, for example, see itself as "necessary". It doesn't see itself as anything at all. 
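The "predict what should come next" idea described above can be sketched in a few lines of Python. This toy bigram model is a gross simplification (real chatbots use neural networks over subword tokens, not word lookup tables), but the core mechanic is the same: count what followed what in the training text, then echo a likely continuation with no understanding of what the words mean.

```python
import random
from collections import defaultdict

# A tiny "training corpus" of observed text.
corpus = (
    "the model guesses the next word "
    "the model guesses what comes next"
).split()

# Count which word has been seen following which.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def predict(word):
    """Return a plausible next word, or None if the word was never seen."""
    options = follows.get(word)
    return random.choice(options) if options else None

print(predict("the"))  # either "model" or "next" - whatever followed "the" most often wins out over many samples
```

Note that the model never "knows" anything: ask it about a word it has seen and it confidently produces a continuation; ask it about a word outside its data and there is simply nothing there.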
It is just a program that runs - it has no more self-awareness than does Microsoft Windows. I note in passing that one of the most annoying features of Microsoft Windows is its constant self-advertising. But, back to AI - what makes AI so dystopian in our current context is that it just shows that people are far more predictable than they want to think they are. The AI algorithms can predict how people will respond with a high degree of accuracy - especially in aggregate. This is why it is so beneficial for healthcare. It may not get every case right on the first try, but if it improves on our traditional method of decision making by human doctors, with their knowledge limitations and unrecognized biases, then it has a real, measurable value for our society. Likewise, though, the real risk from AI today isn't some future apocalypse; it is the problem we face with individuals and organizations using these kinds of predictive analytics to create messaging that influences people to act or behave in predicted ways. Political advertising is no longer just about creating narratives; it is about creating a narrative that will get people who fit a certain audience profile to react in a specific, predictable way. And our ability to predict how people will act or respond in aggregate means that there is real benefit (to some) in producing specific messaging that gets people to act in the desired way - regardless of whether that message is accurate or truthful. The idea of an intelligent, self-aware machine in the sense of duplicating human intelligence (and not just simulating it) is something that isn't even on the horizon. In fact, in all of our doomsday scenarios, it isn't something that we even know how to create - the machine AI would need to learn to innovate (to move beyond this predictive analysis to something that doesn't exist yet) and then to evolve itself (through its own experimentation) to the point that such an intelligence could exist. 
We cannot even conceive of what the specifics of such a shift would look like - let alone, how the AI of today would get to that point. And we haven't even started discussing actual limitations to such a process. Moore's Law is effectively dead. The computing technology that we use is reaching physical limitations. AI - the mathematical and algorithmic processes - can only improve so much without paradigm shifting advances in computing technology - and those technologies are not terribly close on the horizon. Perhaps when we see quantum computers. But, to be successful, a quantum computer will need to be able to hold around a million qubits. The most we have seen in use in a system to date is 49. Don't get me wrong, we are making advances. We are simulating qubits using transistors to help us build the mathematical and algorithmic foundations of our eventual quantum machines - but in today's technology, it took roughly five trillion transistors to simulate only 42 qubits working together. We understand in some ways the path we need to take to move the technology forward. But parts of that pathway are simply theoretical at this point. The technological advances necessary for these parts of the process don't exist - and some of them may be decades away. There is a final point to make here. We have experience with technologies that we know could cause apocalyptic destruction. Nuclear energy remains one of these technologies. We all live with a certain degree of risk. At this moment, the risk from nuclear weapons is far greater than from AI - and it comes without all the benefits. At the same time, we can manage nuclear proliferation - at least to a limited extent - but AI is built on mathematics and algorithms and there isn't any way to control its development (at least not pragmatically). 
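On the qubit-simulation point above: the cost of simulating quantum machines on classical hardware follows from simple arithmetic. An n-qubit state vector holds 2**n complex amplitudes, so the memory required doubles with every qubit added. Here is a back-of-envelope sketch; the 16-bytes-per-amplitude figure is an assumption (a double-precision complex number), and real simulators use various tricks to do better than this naive bound.

```python
# Memory needed to store a full n-qubit state vector,
# assuming 16 bytes (two 64-bit floats) per complex amplitude.
def state_vector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

for n in (30, 36, 42):
    gib = state_vector_bytes(n) / 2**30
    print(f"{n} qubits -> {gib:,.0f} GiB")
```

Going from 30 to 42 qubits multiplies the memory requirement by 4,096, which is why each additional simulated qubit is so much more expensive than the last.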
Rather than the fearmongering, we should focus instead on what can be done to encourage the use of AI in more beneficial ways - to increase the value to the world - so as to offset any risk that comes with it as much as possible. And to the extent that we already see some negative consequences of this technology, we should be trying to find productive ways to address those problems. 12 hours ago, Orthodox Christian said: I don't trust anything in our world anymore. What an exhausting way to live. 2 Link to comment
bsjkki Posted April 3, 2023 Share Posted April 3, 2023 14 hours ago, The Nehor said: You are ascribing cunning to what is just rote learning. The AI doesn't know what it is saying well enough to be clever. [...] 1 Link to comment
Orthodox Christian Posted April 3, 2023 Share Posted April 3, 2023 30 minutes ago, Benjamin McGuire said: See, this is part of the problem. AI doesn't "see itself" as anything. There is no self-awareness. There is no creativity. [...] 
What an exhausting way to live. Thank you, Benjamin. You are so interesting and knowledgeable; I feel more enlightened now. Not altogether at ease, but not so alarmed, I suppose. As I said, my mind is in no way scientific, so I appreciate your explanations. Link to comment
Orthodox Christian Posted April 3, 2023 Share Posted April 3, 2023 33 minutes ago, bsjkki said: Oh my! My brother is in the FB sin bin for 3 days because some bot thing accused him of saying something to someone during a chat. He has no idea what or to whom. Time to seriously curtail social media... For myself, that is. 1 Link to comment
Nofear Posted April 3, 2023 Author Share Posted April 3, 2023 ChatGPT 4 is a fair bit more powerful than 3.5. ChatGPT 5 (finishing training this year?) is expected to be much more powerful than 4, so much so that some are tossing around terms like AGI. Personally, for theophilosophical reasons, I don't believe in morally accountable AGI, but it would still be a huge leap. https://www.digitaltrends.com/computing/gpt-5-artificial-general-intelligence/ 1 Link to comment
The Nehor Posted April 3, 2023 Share Posted April 3, 2023 5 hours ago, bsjkki said: The chatbot probably read a lot of stuff about teachers sexually harassing their students because that is what makes the news and gets social media buzzing. It says more about us than it does about AI. And the AI knows how to BS answers by referring to articles that don't exist. It is the kind of thing a human would say, except the human would make sure the article existed first. Well, maybe. 1 Link to comment
The Nehor Posted April 3, 2023 Share Posted April 3, 2023 4 hours ago, Orthodox Christian said: Oh my! My brother is in the FB sin bin for 3 days because some bot thing accused him of saying something to someone during a chat. He has no idea what or to whom. Time to seriously curtail social media... For myself, that is. We need more social media bot purges. Or we need to make bots more useful. It means the best bots will be on the internet and the worst humans will be off of it. I see no downside. 1 Link to comment
CA Steve Posted April 3, 2023 Share Posted April 3, 2023 Who owns the output of an AI program? For example, if I ask an AI to produce a short story that somehow gets published, who has the copyright on that story? Link to comment
LoudmouthMormon Posted April 3, 2023 Share Posted April 3, 2023 Artists have been yelling and arguing with each other about this question, for years now. I think as things stand now, it's unregulated, there are no laws, things are untried in courts, so the answer is "dunno". Link to comment
rodheadlee Posted April 3, 2023 Share Posted April 3, 2023 I traveled to the Future. They told me true AI is not possible. Garbage in garbage out. Link to comment
longview Posted April 3, 2023 Share Posted April 3, 2023 4 minutes ago, CA Steve said: Who owns the output of an AI program? For example, if I ask an AI to produce a short story that somehow gets published, who has the copyright on that story? It would be you, since you were creative enough to pose the question in such a way that the AI was "inspired" to give you a high-quality response. Nobody is going to confront you on this (at least for a few years). But if you become famous 10 or 20 years later and academics research your body of work, they will be befuddled as to why this particular piece (the one by the AI) does not conform to the writing style of the rest of your works. Link to comment
The Nehor Posted April 3, 2023 Share Posted April 3, 2023 1 hour ago, rodheadlee said: I traveled to the Future. They told me true AI is not possible. Garbage in garbage out. So is it impossible, or are humans just not competent enough to do it? 1 Link to comment
The Nehor Posted April 3, 2023 Share Posted April 3, 2023 1 hour ago, LoudmouthMormon said: Artists have been yelling and arguing with each other about this question, for years now. I think as things stand now, it's unregulated, there are no laws, things are untried in courts, so the answer is "dunno". Bots are now creating music good enough that humans can’t tell the difference between their work and human work. Music with lyrics isn’t quite there yet. Link to comment
rodheadlee Posted April 4, 2023 Share Posted April 4, 2023 On 4/3/2023 at 2:51 PM, The Nehor said: So it is impossible or are humans just not competent enough to do it. Impossible. They can function within the bounds of their rules. They are not truly able to innovate. If that's the right word to use. Link to comment
The Nehor Posted April 4, 2023 Share Posted April 4, 2023 10 minutes ago, rodheadlee said: Impossible. They can function within the bounds of their rules. They are not truly able to innovate. If that's the right word to use. Innovation seems possible. It is being self-aware that might be impossible. Link to comment
The Nehor Posted April 5, 2023 Share Posted April 5, 2023 2 hours ago, bsjkki said: Elon Musk spent years building up the image of him being a technical genius and then quickly squandered that manufactured reputation to buy a failing social media company so he could be an early 2000’s teenage internet edgelord. Prefer this one: 1 Link to comment
Nofear Posted April 5, 2023 Author Share Posted April 5, 2023 An explanation of why AI is currently so bad at drawing hands (and teeth, abs, etc.). Certainly not an insurmountable problem, but it does illustrate some of the current differences between AI and human intelligence. 1 Link to comment
LoudmouthMormon Posted April 24, 2023 Share Posted April 24, 2023 Whelp, I recently gave a talk in church. I'm the last speaker, so I need to be able to fill time or cut short depending on the clock. So, to plan for the possible need to take up more time, I enlist ChatGPT and ask it to write me a poem about one element of my topic. I review its output, make some edits and changes to perhaps 20% of what it gave me, and have it on my last page of notes in case it's needed. I had the extra time. I read the poem. Almost a dozen people came up afterwards to thank me for the talk, especially the poem. Almost exclusively the poem. Someone asked that I send them a copy of the poem. I say the same thing to everyone who praised the poem: "You want to hear something terrifying? ChatGPT wrote that poem for me." They are terrified. I am terrified. We are all worried about our kids, and how startlingly easy it is to have this thing do the work for you. 1 Link to comment
Nofear Posted April 24, 2023 Author Share Posted April 24, 2023 Prompt: "Write a 5 minute talk where a mother talks about her son leaving for a latter-day saint mission." Hello everyone, As a mother, one of the most difficult yet rewarding experiences I've had is watching my children grow up and spread their wings. And today, I'm here to talk about my son who is about to embark on an incredible journey. In a few short weeks, my son will be leaving on a Latter-day Saint mission. For those of you who may not be familiar with what that entails, a mission is a voluntary service opportunity for young men and women in our church to share the message of Jesus Christ with others around the world. My son will be serving for two years, and during that time, he'll be living in a foreign country, learning a new language, and dedicating all of his time and energy to serving the people there. I'll be honest, when my son first told me he wanted to go on a mission, I was scared. I was scared for his safety, scared for his well-being, and scared for what it would mean to have him gone for so long. But as I've watched him prepare for this journey over the past few months, I've come to realize that this is truly an opportunity of a lifetime. One of the things that has impressed me the most about my son's decision to go on a mission is his commitment to serving others. He's spent countless hours studying, practicing his language skills, and preparing to leave everything he knows behind to help people he's never met. And he's doing it all with a smile on his face and a heart full of love. As a mother, it's hard to let go and watch your child go off into the world. But I know that my son is in good hands. He'll be surrounded by other missionaries who share his same values and dedication to service, and he'll be guided by inspired leaders who have dedicated their lives to helping young men and women like him. I also know that my son will be blessed for his service. 
I've seen firsthand the joy and fulfillment that comes from serving others, and I know that my son will experience that same joy and fulfillment as he shares the message of Jesus Christ with those around him. So, as my son prepares to leave, I want to express my gratitude to all of you who have supported him along the way. I'm grateful for the friends who have encouraged him, the teachers who have inspired him, and the members of our church who have loved him. I'm also grateful for the opportunity to be a part of this journey, to watch my son grow and learn and become the person he was meant to be. In closing, I want to share my testimony that I know this work is true. I know that my son's mission is part of a greater plan to bring happiness and peace to the world. And I know that as we serve others, we come closer to our Savior, Jesus Christ. Thank you for listening, and please keep my son and all of the missionaries in your prayers as they go forth to serve. PS: Update: OpenAI is not working on ChatGPT 5. The rumors were wrong. 1 Link to comment