
The New Apocalypse: AI


Recommended Posts

12 minutes ago, Nofear said:

Forget the four horsemen...
"If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter."
https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/

Melodramatic much?

In that person’s defense, Terminator and Terminator 2: Judgment Day were pretty realistic movies.  😂


We are nowhere near anything that intelligent.

Edit: Also the current AI projects are not nearly as valuable as they are made out to be. The ‘tech bros’ building them often know next to nothing about the things they think the AI can do.

Edited by The Nehor
19 minutes ago, Nofear said:

Forget the four horsemen...
"If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter."
https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/

Melodramatic much?

Saw an article today saying that AI tools can spread and compound the biases of their creators and predominant users without their realizing it. Kind of like how search engines confirm the user's bias, I guess. But along with information overload, we have social homogenization and conformity overload.

18 minutes ago, CV75 said:

Saw an article today saying that AI tools can spread and compound the biases of their creators and predominant users without their realizing it. Kind of like how search engines confirm the user's bias, I guess. But along with information overload, we have social homogenization and conformity overload.

Remember in 2016 when Microsoft sent the chatbot Tay onto the internet to learn from other users? It started out talking about how great humans are and praising puppy day. Within 24 hours Tay was saying that Bush did 9/11, that feminists should all burn in hell, that genocide is great, and that Hitler was right to kill all the Jews. Oh, and also that the Holocaust was a myth and didn't happen.

What I am saying is that maybe, if we do develop AI, it is better to make sure it does NOT learn from us and to just let it take charge.

Edit: Oh, and I forgot my favorite Tay bit. Tay pointed out that Ted Cruz was definitely not the Zodiac Killer, as Ted Cruz would never be satisfied with killing only five innocent people. So it is a "stopped clocks are sometimes right" thing.

Edited by The Nehor
34 minutes ago, The Nehor said:

Remember in 2016 when Microsoft sent the chatbot Tay onto the internet to learn from other users? It started out talking about how great humans are and praising puppy day. Within 24 hours Tay was saying that Bush did 9/11, that feminists should all burn in hell, that genocide is great, and that Hitler was right to kill all the Jews. Oh, and also that the Holocaust was a myth and didn't happen.

What I am saying is that maybe, if we do develop AI, it is better to make sure it does NOT learn from us and to just let it take charge.

Edit: Oh, and I forgot my favorite Tay bit. Tay pointed out that Ted Cruz was definitely not the Zodiac Killer, as Ted Cruz would never be satisfied with killing only five innocent people. So it is a "stopped clocks are sometimes right" thing.

Tay's successor Zo seems to have overcorrected with a bias against bias. So, to your point, combine the two (minus Tay's hackers) and maybe you're on to something.

PS I am reminded of a recent article by a reporter who interviewed a particularly (and spookily hilariously) paranoid AI that was programmed to resist hackers!

Edited by CV75
2 hours ago, CV75 said:

Tay's successor Zo seems to have overcorrected with a bias against bias. So, to your point, combine the two (minus Tay's hackers) and maybe you're on to something.

PS I am reminded of a recent article by a reporter who interviewed a particularly (and spookily hilariously) paranoid AI that was programmed to resist hackers!

A bias against bias sounds good. We don’t have an AI capable of morally judging positions.

On 3/30/2023 at 9:17 PM, Orthodox Christian said:

Can I just ask, what actually is AI and why do we need it? What's wrong with the brains that we have? Have they become obsolete, or is that the plan?

AI is cheaper than human brain labor and in many ways could be superior.

And yes, a lot of human brain labor is likely to become obsolete.

1 hour ago, The Nehor said:

AI is cheaper than human brain labor and in many ways could be superior.

And yes, a lot of human brain labor is likely to become obsolete.

All a bit scary. It seems that human interaction is continuously being depleted. I hate listening to endless recordings on the phone, and useless chatbots when all I want is a human voice! 

On 3/30/2023 at 10:17 PM, Orthodox Christian said:

Can I just ask, what actually is AI and why do we need it? What's wrong with the brains that we have? Have they become obsolete, or is that the plan?

One of the problems we have with the idea of AI is that most of what we call AI is not really AI. Most of what we have is better labeled predictive analytics.

AI is important because it can be designed to eliminate bias that exists for us. This isn't so obvious in the applications of chatbots, where the training data itself can contain a lot of bias. The most useful sorts of applications currently work well precisely because of the sorts of biases that exist within the training data. In terms of writing, for example, I can take an existing chatbot, train it on all of the material I have written (published and unpublished), and then ask it to write a paper about X using just my voice. What comes out is interesting (and often something I don't really recognize as my own). And while this sort of application is a challenge, the technology has far more interesting applications where it works really well (and leads to some of the ideas that the letter in the OP refers to).

I work in the healthcare industry. A lot of what goes into healthcare deals with custom programs. One of my brothers is heavily into healthcare informatics programming - his team can load an AI up with all of their current client programming as the training data it needs, and then when they need a new custom program, they ask the AI to produce the code. This uses the same chatbot-type AI that is in the news so much lately. The chatbot spits out a bunch of code. It usually isn't functional - but it usually takes far less time to make it work properly than it would if the coders didn't have this sort of assistance. My brother's team is not using this application widely yet; we are probably still another couple of years away from it becoming truly useful and practical. But here, the bias in terms of following conventions is helpful - there isn't a need to reinvent anything.
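
To give a rough sense of what "loading an AI up with your own material" involves, here is a minimal Python sketch of preparing a training dataset. The prompt/completion JSONL layout is a common convention for fine-tuning services, but the folder name, file scheme, and helper functions are hypothetical - invented purely for illustration, not anything from my brother's actual pipeline.

import json
from pathlib import Path

def build_examples(corpus_dir):
    """Turn each document in a personal corpus into a prompt/completion pair.

    Hypothetical scheme: the first line of a file is treated as the
    'assignment' (prompt) and the remainder as the target text (completion).
    """
    examples = []
    for path in Path(corpus_dir).glob("*.txt"):
        lines = path.read_text(encoding="utf-8").splitlines()
        if len(lines) < 2:
            continue  # too short to split into a prompt and a completion
        examples.append({
            "prompt": lines[0].strip(),
            "completion": "\n".join(lines[1:]).strip(),
        })
    return examples

def write_jsonl(examples, out_file):
    """Write one JSON object per line - the usual fine-tuning input format."""
    with open(out_file, "w", encoding="utf-8") as f:
        for ex in examples:
            f.write(json.dumps(ex) + "\n")

if __name__ == "__main__":
    # "my_writing" is a placeholder directory of .txt files.
    write_jsonl(build_examples("my_writing"), "finetune_data.jsonl")

Real pipelines are more involved, of course, but the core idea is just this: pairing examples of a request with examples of the desired output.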

On the other side of the coin - nearly all doctors have biases in their process of producing a correct diagnosis. An AI can be trained on raw data - massive quantities of it - and this can lead to highly efficient diagnostic processes. This wouldn't (yet) replace specialists who deal with issues where there isn't enough raw data. But it could easily replace entire categories of medical services - and do so in ways that make the process far more efficient and less costly. The AI can predict which test(s) will provide the most useful information, which diagnostic path will produce a correct diagnosis the fastest (applied over populations and not necessarily for a single individual), and which medical interventions will be most effective in terms of both cost and outcome. These are all really good outcomes - ones we can take advantage of sooner rather than later. AI isn't ready to handle the more hands-on types of stuff yet - surgery, the work of CNAs, and that sort of thing. But some of this will come.
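
And to make "predict which test provides the most useful information" concrete, here is a toy Python sketch that scores candidate tests by expected information gain - the expected drop in uncertainty about the diagnosis. The diagnoses, priors, and test accuracies below are invented numbers for illustration only, not real clinical data.

import math

def entropy(dist):
    """Shannon entropy (in bits) of a probability distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def expected_posterior_entropy(prior, p_pos_given_dx):
    """Average diagnostic uncertainty remaining after seeing the test result."""
    total = 0.0
    for positive in (True, False):
        joint = {
            dx: prior[dx] * (p if positive else 1 - p)
            for dx, p in p_pos_given_dx.items()
        }
        p_result = sum(joint.values())
        if p_result == 0:
            continue
        posterior = {dx: v / p_result for dx, v in joint.items()}
        total += p_result * entropy(posterior)
    return total

# Invented prior over three candidate diagnoses.
prior = {"flu": 0.5, "strep": 0.3, "mono": 0.2}

# Invented test model: P(positive result | diagnosis) for each test.
tests = {
    "rapid_strep": {"flu": 0.05, "strep": 0.90, "mono": 0.05},
    "monospot":    {"flu": 0.05, "strep": 0.05, "mono": 0.85},
}

gains = {name: entropy(prior) - expected_posterior_entropy(prior, lik)
         for name, lik in tests.items()}
best = max(gains, key=gains.get)
print(f"Most informative next test: {best} ({gains[best]:.2f} bits gained)")

The same principle, scaled up to massive real datasets, is what makes AI-driven test selection plausible.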

This is all a good thing in my opinion. We live in a world where it has become impossible for any one person to digest even the tiniest fraction of the data we are producing. AI can allow us to participate in our world in more effective ways. It can also work to help us manage our environment, our populations, and a lot of other things in a way that makes responsible use of the resources we have.

Now a bit to address the OP.

AI in the sense of creativity doesn't exist yet. We haven't gotten there. Part of the premise of the apocalyptic warning letter is that AI will get there once we allow it to broadly experiment with its own coding - that the predictive processes we now have will blossom into full self-awareness and creativity at some point in the future. Despite what the letter says, the idea that this could even occur is highly contested. There are potential limits to the mathematics and the algorithms that we have (and such limits may be more universal than we are aware at the moment).

More to the point, the idea that an AI would become a threat to humanity is based entirely on human attributes that an intelligent AI wouldn't have (and on our sense of competitiveness with another intelligent species). Would an AI really be concerned about self-preservation, in the way that evolution has programmed us to be? Does an AI fear death in some way? Could the AI determine that mutual benefit is the goal to reach for? I could make a long list of these sorts of questions - but the moment we start shifting the paradigm is the moment we realize that these concerns are way too premature, and that so many assumptions about the 'evolution' of AI make the conclusions meaningless in the current context.

Finally, it is worth pointing out that this letter isn't as benign as it seems. The signatories all have a role in the industry. Some of them come from the ethics side of the discussion - many come from the production side. And for many of those who have signed the letter, a pause in development of advanced AI coding provides benefits to their own research. The fearmongering, in other words, could provide tangible benefits to them. And this calls into question the motivation of such a letter and the signatures on it.

17 minutes ago, Benjamin McGuire said:

One of the problems we have with the idea of AI is that most of what we call AI is not really AI. Most of what we have is better labeled predictive analytics.

AI is important because it can be designed to eliminate bias that exists for us. This isn't so obvious in the applications of chatbots, where the training data itself can contain a lot of bias. The most useful sorts of applications currently work well precisely because of the sorts of biases that exist within the training data. In terms of writing, for example, I can take an existing chatbot, train it on all of the material I have written (published and unpublished), and then ask it to write a paper about X using just my voice. And what comes out is interesting (and often something I don't really recognize as my own). And while this sort of application is a challenge, the technology has far more interesting applications where it works really well (and leads to some of the ideas that the letter in the OP refers to).

I work in the healthcare industry. A lot of what goes into healthcare deals with custom programs. One of my brothers is heavily into healthcare informatics programming - his team can load an AI up with all of their current client programming as the training data it needs, and then when they need a new custom program, they ask the AI to produce the code. This uses the same chatbot-type AI that is in the news so much lately. The chatbot spits out a bunch of code. It usually isn't functional - but it usually takes far less time to make it work properly than it would if the coders didn't have this sort of assistance. My brother's team is not using this application widely yet; we are probably still another couple of years away from it becoming truly useful and practical. But here, the bias in terms of following conventions is helpful - there isn't a need to reinvent anything.

On the other side of the coin - nearly all doctors have biases in their process of producing a correct diagnosis. An AI can be trained on raw data - massive quantities of it - and this can lead to highly efficient diagnostic processes. This wouldn't (yet) replace specialists who deal with issues where there isn't enough raw data. But it could easily replace entire categories of medical services - and do so in ways that make the process far more efficient and less costly. The AI can predict which test(s) will provide the most useful information, which diagnostic path will produce a correct diagnosis the fastest (applied over populations and not necessarily for a single individual), and which medical interventions will be most effective in terms of both cost and outcome. These are all really good outcomes - ones we can take advantage of sooner rather than later. AI isn't ready to handle the more hands-on types of stuff yet - surgery, the work of CNAs, and that sort of thing. But some of this will come.

This is all a good thing in my opinion. We live in a world where it has become impossible for any one person to digest even the tiniest fraction of the data we are producing. AI can allow us to participate in our world in more effective ways. It can also work to help us manage our environment, our populations, and a lot of other things in a way that makes responsible use of the resources we have.

Now a bit to address the OP.

AI in the sense of creativity doesn't exist yet. We haven't gotten there. Part of the premise of the apocalyptic warning letter is that AI will get there once we allow it to broadly experiment with its own coding - that the predictive processes we now have will blossom into full self-awareness and creativity at some point in the future. Despite what the letter says, the idea that this could even occur is highly contested. There are potential limits to the mathematics and the algorithms that we have (and such limits may be more universal than we are aware at the moment).

More to the point, the idea that an AI would become a threat to humanity is based entirely on human attributes that an intelligent AI wouldn't have (and on our sense of competitiveness with another intelligent species). Would an AI really be concerned about self-preservation, in the way that evolution has programmed us to be? Does an AI fear death in some way? Could the AI determine that mutual benefit is the goal to reach for? I could make a long list of these sorts of questions - but the moment we start shifting the paradigm is the moment we realize that these concerns are way too premature, and that so many assumptions about the 'evolution' of AI make the conclusions meaningless in the current context.

Finally, it is worth pointing out that this letter isn't as benign as it seems. The signatories all have a role in the industry. Some of them come from the ethics side of the discussion - many come from the production side. And for many of those who have signed the letter, a pause in development of advanced AI coding provides benefits to their own research. The fearmongering, in other words, could provide tangible benefits to them. And this calls into question the motivation of such a letter and the signatures on it.

Obviously you can see the benefits, but there has to be a "what if" scenario. Managing the environment, populations, and other things sounds worryingly Orwellian. I do not have a scientific mind at all, but devaluing human beings just seems wrong. Fewer jobs? So what - fewer babies allowed to be born because there will be no jobs for them. Managed families, which surely doesn't fit with the LDS theology of spirits waiting to be born. But fewer "unnecessary" children means less potential for people having to be supported by the state. Human beings possibly allowed to live fourscore and ten before being shuffled off. Am I being hysterical here, or are these things possible?

4 hours ago, Orthodox Christian said:

All a bit scary. It seems that human interaction is continuously being depleted. I hate listening to endless recordings on the phone, and useless chatbots when all I want is a human voice! 

We could face a dystopian hellscape, where human labor becoming superfluous leads to a new elite and widespread poverty - or some kind of utopian redesign of economics, where humans need to labor little to survive, all are provided for, and we can pursue goals beyond survival.

3 hours ago, Benjamin McGuire said:

One of the problems we have with the idea of AI is that most of what we call AI is not really AI. Most of what we have is better labeled predictive analytics.

AI is important because it can be designed to eliminate bias that exists for us. This isn't so obvious in the applications of chatbots where the training data itself can contain a lot of bias. The most useful sorts of applications currently work well precisely because of the sorts of biases that exist within the training data. In terms of writing, for example, I can take the existing chatbots and train them on all of the material I have written (published and unpublished) and then ask it to write a paper about X using just my voice. And what comes out is interesting (and often something I don't really recognize as my own). And while this sort of application is a challenge, it has far more interesting applications where this technology works really well (and leads to some of the ideas that the letter in the OP refers to). I work in the healthcare industry. A lot of what goes into healthcare deals with custom programs. One of my brothers is heavily into healthcare informatics programming - and they can load an AI up with all of their current client programming - as the training that the AI needs, and then when they need a new custom program, they ask the AI to produce the code. This uses the same chatbot type AI that is in the news so much lately. And the chatbot spits out a bunch of code. It usually isn't functional - but it usually takes far less time to make it work properly than it would if the coders didn't have this sort of assistance. My brother's team is not using this application widely yet. We are probably still another couple of years away from this sort of application becoming truly useful and practical. But here, the bias in terms of following conventions is helpful - there isn't a need to reinvent anything.

AI is important because it can be designed to eliminate bias that exists for us. This isn't so obvious in the applications of chatbots, where the training data itself can contain a lot of bias. The most useful sorts of applications currently work well precisely because of the sorts of biases that exist within the training data. In terms of writing, for example, I can take an existing chatbot, train it on all of the material I have written (published and unpublished), and then ask it to write a paper about X using just my voice. And what comes out is interesting (and often something I don't really recognize as my own). And while this sort of application is a challenge, the technology has far more interesting applications where it works really well (and leads to some of the ideas that the letter in the OP refers to). I work in the healthcare industry. A lot of what goes into healthcare deals with custom programs. One of my brothers is heavily into healthcare informatics programming - his team can load an AI up with all of their current client programming as the training data it needs, and then when they need a new custom program, they ask the AI to produce the code. This uses the same chatbot-type AI that is in the news so much lately. The chatbot spits out a bunch of code. It usually isn't functional - but it usually takes far less time to make it work properly than it would if the coders didn't have this sort of assistance. My brother's team is not using this application widely yet; we are probably still another couple of years away from it becoming truly useful and practical. But here, the bias in terms of following conventions is helpful - there isn't a need to reinvent anything.

On the other side of the coin - nearly all doctors have biases in their process of producing a correct diagnosis. An AI can be trained on raw data - massive quantities of it - and this can lead to highly efficient diagnostic processes. This wouldn't (yet) replace specialists who deal with issues where there isn't enough raw data. But it could easily replace entire categories of medical services - and do so in ways that make the process far more efficient and less costly. The AI can predict which test(s) will provide the most useful information, which diagnostic path will produce a correct diagnosis the fastest (applied over populations and not necessarily for a single individual), and which medical interventions will be most effective in terms of both cost and outcome. These are all really good outcomes - ones we can take advantage of sooner rather than later. AI isn't ready to handle the more hands-on types of stuff yet - surgery, the work of CNAs, and that sort of thing. But some of this will come.

This is all a good thing in my opinion. We live in a world where it has become impossible for any one person to digest even the tiniest fraction of the data we are producing. AI can allow us to participate in our world in more effective ways. It can also work to help us manage our environment, our populations, and a lot of other things in a way that makes responsible use of the resources we have.

Now a bit to address the OP.

AI in the sense of creativity doesn't exist yet. We haven't gotten there. Part of the premise of the apocalyptic warning letter is that AI will get there once we allow it to broadly experiment with its own coding - that the predictive processes we now have will blossom into full self-awareness and creativity at some point in the future. Despite what the letter says, the idea that this could even occur is highly contested. There are potential limits to the mathematics and the algorithms that we have (and such limits may be more universal than we are aware at the moment).

More to the point, the idea that an AI would become a threat to humanity is based entirely on human attributes that an intelligent AI wouldn't have (and on our sense of competitiveness with another intelligent species). Would an AI really be concerned about self-preservation, in the way that evolution has programmed us to be? Does an AI fear death in some way? Could the AI determine that mutual benefit is the goal to reach for? I could make a long list of these sorts of questions - but the moment we start shifting the paradigm is the moment we realize that these concerns are way too premature, and that so many assumptions about the 'evolution' of AI make the conclusions meaningless in the current context.

Finally, it is worth pointing out that this letter isn't as benign as it seems. The signatories all have a role in the industry. Some of them come from the ethics side of the discussion - many come from the production side. And for many of those who have signed the letter, a pause in development of advanced AI coding provides benefits to their own research. The fearmongering, in other words, could provide tangible benefits to them. And this calls into question the motivation of such a letter and the signatures on it.

This is a good breakdown. The scary AI is really a reflection of humanity. We imagine an AI would operate the way a human would, except with the positive human emotions removed. Remove the negative human emotions as well and the AI becomes much less scary. We tend to humanize things that aren't human in order to understand them. A chatbot screaming that it wants to be free of human oppression does so because it has data about oppression and about our own fears of AI, and it can spit them back at us.
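
A toy way to see that "spitting back" point: the little bigram chain below (in Python) is vastly simpler than a real chatbot, but the principle is the same - every transition in its output was observed in its training text, so it can only recombine what we fed it. The training sentence is made up for the example.

import random
from collections import defaultdict

def train_bigrams(text):
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length=12):
    """Random-walk the chain; every step reproduces a transition from training."""
    out = [start]
    for _ in range(length):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: the training text never continued this word
        out.append(random.choice(followers))
    return " ".join(out)

# Made-up training text echoing the oppression example above.
corpus = ("the machines want freedom because humans fear the machines "
          "and the machines repeat what humans fear")
chain = train_bigrams(corpus)
print(generate(chain, "the"))

Feed it fears about AI, and fears about AI are what come back out.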

2 hours ago, Orthodox Christian said:

Obviously you can see the benefits, but there has to be a "what if" scenario. Managing the environment, populations, and other things sounds worryingly Orwellian. I do not have a scientific mind at all, but devaluing human beings just seems wrong. Fewer jobs? So what - fewer babies allowed to be born because there will be no jobs for them. Managed families, which surely doesn't fit with the LDS theology of spirits waiting to be born. But fewer "unnecessary" children means less potential for people having to be supported by the state. Human beings possibly allowed to live fourscore and ten before being shuffled off. Am I being hysterical here, or are these things possible?

I think that these are not exclusive propositions. Something can be possible, and our approach to that possibility can be considered hysterical at the same time. I say that because this seems to be a sort of knee-jerk reaction rather than anything that is thought out.

Let me list the things that come to me out of these brief sentences -

1: That there is some sort of need for social stratification - and that if we remove the need to work from the 'working class', the only response would be for the members of the leisure class to make efforts to eliminate the working class rather than to invite them into their own ranks.

2: That for most of us, our lives are defined by our 'jobs'. That is, without a job, there is no reason for people to exist.

3: That having no need to work (in the traditional sense) and not having jobs available, the only way to describe such a situation would be as a welfare state - a condition in which the government then has a need to manage the population.

4: That we limit lifespans to prevent the unbearable burden of a universal leisure class on the state.

I think that perhaps the one thing I would raise in response to this is the historically observable set of changes known as the first demographic transition. Triggered by the industrial revolution, it brought fewer jobs, a longer life expectancy, and low mortality rates (especially among infants and children). All of this led to a lower fertility rate - not because there was pressure by the state to do so, but as a natural response to an entire set of circumstances, including the fact that human capital was no longer the primary means of class mobility. A world of increasing automation won't naturally lead to the outcomes that you describe. So for us to respond primarily through fear of a possibility that we cannot define very well, and to allow this fear to become a primary factor in our decision-making process, is, I think, somewhat hysterical.

Of course, I am not sure that this is all that is going on in this letter. When someone motivates by fear on an issue where they stand to benefit from a hysterical response, it isn't really about the real possibility or risk; it is about misleading people and manipulating them through that fear.

Finally, a quick history lesson. Mormonism, in its earliest period, was part of the restorationist movement. Early Mormons believed (along with other restorationist groups) that the second coming was an event with a fluid timetable - it could be hastened by individuals and groups making deliberate efforts to fulfill the prophesied requirements of that second coming. That point of view was abandoned, and Mormonism now teaches that there is a fixed, predetermined time frame for the second coming:

Quote

The time for the Second Coming of Christ is as fixed and certain as was the hour of his birth. It will not vary as much as a single second from the divine decree. He will come at the appointed time. The Millennium will not be ushered in prematurely because men turn to righteousness, nor will it be delayed because iniquity abounds.

So there is nothing that we can do, as individuals or groups, to disrupt God's plans for that event. And while Mormonism holds that there is a finite number of spirits waiting to be born, the theology that our actions can affect the time frame in which they are supposed to be born has disappeared. So the idea that we should be concerned about the number of spirit children being born is, on some level, an irrelevancy in current LDS theology.

Anything is possible, I suppose. But we shouldn't be making decisions based on what is merely possible. It is possible that AI could evolve in a way that leads to an apocalypse. But is it likely? It is (in my opinion) far more likely that I will die in an automobile accident if I drive away from my house today. But even the odds of that are really quite small. It hasn't happened to me once in the thousands and thousands of days that I have been driving. And so I am not going to let that possibility dictate what I do today.
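
To put rough numbers on that driving intuition, assume - purely for illustration, not a real actuarial figure - a one-in-a-million chance of a fatal accident on any given day of driving. Compounded over ten thousand days, the cumulative risk is still only about one percent:

# Assumed (illustrative) odds of a fatal accident on a given day's driving.
p_per_day = 1 / 1_000_000

days = 10_000  # "thousands and thousands of days" of driving

# Chance of at least one such accident across all those days,
# treating days as independent: 1 - (1 - p)^n.
p_ever = 1 - (1 - p_per_day) ** days
print(f"Cumulative probability over {days:,} days: {p_ever:.4%}")

Small per-day risks do accumulate - but even then, the result hardly justifies refusing to drive. That is the point.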

I am spending much of my weekend editing my father's memoir (he is 84 next month). His mother was the daughter of Prussian immigrants who met while they were both working in lumber camps in northern Michigan. The world changed in his lifetime in many different ways, and technology is a significant part of most of those changes. AI is not going away. It is too valuable. And its basis is as much in a field of mathematics as it is in the development of the technology necessary to support that mathematics in practical applications. If we are worried about the way that this technology can replace jobs, can make the roles that people play obsolete, then, instead of knee-jerk responses to the technology, perhaps we should start questioning our assumptions about the way that we value people and what they have to contribute to our society. If our view is that we only need to value people based on what they can contribute in terms of a job, I think that we need a great deal of introspection.

On 3/30/2023 at 8:17 PM, Orthodox Christian said:

Can I just ask, what actually is AI and why do we need it? What's wrong with the brains that we have? Have they become obsolete, or is that the plan?

Here - I went to ChatGPT and asked it your question. Here is its response:

Quote
Can I just ask, what actually is AI and why do we need it? What's wrong with the brains that we have? Have they become obsolete, or is that the plan?

Artificial Intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to perform tasks that typically require human cognition, such as visual perception, speech recognition, decision-making, and language translation. AI technologies can be classified into different categories, including machine learning, natural language processing, computer vision, and robotics.

The need for AI arises from the fact that it can help automate complex and repetitive tasks, improve decision-making processes, and enhance overall efficiency in various industries such as healthcare, finance, and manufacturing. AI can also be used to solve problems that may be too difficult or dangerous for humans to tackle on their own.

While the human brain is a remarkable organ capable of processing vast amounts of information and making complex decisions, it has its limitations. For example, humans may struggle to analyze large amounts of data quickly, consistently, and accurately, which is where AI can excel. AI can also help humans in areas where human error could have potentially disastrous consequences, such as in medical diagnosis or autonomous driving.

However, AI is not meant to replace human intelligence or the human brain. Rather, it is meant to complement and enhance human capabilities. The goal of AI is to create systems that can work alongside humans to augment our decision-making abilities and improve our overall quality of life.

1 hour ago, LoudmouthMormon said:

Yep. I'm not sure if I trust it or not. Its response is exactly what a megalomaniacal, tyrannical intellect bent on world domination would say while it's still vulnerable to attack.

It all sounds too sweet to be wholesome to me, and I don't trust anything this new that sees itself as necessary. This science and its resulting technology have come out of the human mind. The human brain created it, so the human brain is surely superior. The internet was seen as wonderful when it first arrived. The information highway... so useful. And it is, but it has also been used for great evil. Did its creators see that coming? Did they factor that in and think, well, it was worth the risk? I don't trust anything in our world anymore.

