Artificial intelligence (AI) has long been viewed as a threat to repetitive jobs, but foundation models suggest that it can help overcome these challenges. Generative AI can supplement the creativity of employees and customers, helping them produce and identify novel ideas and improve productivity. This technology is set to transform creative professions, enhancing productivity and spawning new artistic collaborations. AI models can now match humans in creativity tests and are at the cusp of augmenting the creativity of every individual.
Generative AI is set to revolutionize fields such as digital art and innovation, while also fostering creativity and idea generation. AI models can generate a variety of content types, including text, images, and video, which can help businesses and artists. Generative AI tools can solve the challenge faced in idea contests: combining or merging a large number of ideas to produce much stronger ones.
The “foundation models” that can do these things represent a breakthrough in artificial intelligence. The leading opinion is that AI cannot generate fundamentally new ideas on its own but can support humans by catalyzing human creativity. AI algorithms can create abstract patterns, morph images, or generate unique compositions that artists can incorporate into their work. Music is one area where AI will significantly increase productivity.
In conclusion, generative AI is redefining creative norms by both mimicking and enhancing human creativity. While AI has immense potential, scholars must strike a balance between human creativity and AI-driven content generation. By capitalizing on generative AI developments while mitigating the risks to human creativity, we can harness the power of AI to enhance productivity and innovation across many fields.
📹 How will AI change the world?
Explore the current limitations of artificial intelligence and the possibility of creating human-compatible technology.
Does artificial intelligence lack creativity?
Although artificial creativity is an effective method, it lacks certain characteristics that are inherent to human creativity and may be regarded as a form of pseudo-creativity.
How does generative AI affect creativity?
Generative AI is transforming the creative landscape by combining machine generation with human creativity, pushing boundaries and enhancing productivity. AI is not about limiting creativity but about augmenting it, enabling more in less time. Creativity is not about mastering tools or software; it is about generating unique ideas. AI can enhance this process, making it an indispensable ally in our creative journeys, but only by embracing this alliance can we unlock its true potential.
Will artists survive AI?
AI is not expected to replace human artists: some will continue to create works using AI, while others will reject it. Artistry is constantly evolving, and AI models can already generate works that would have appeared virtuosic by 20th-century standards. Culture will persist and evolve in unexpected ways, and human artists will be fine as long as the culture and media landscape don’t change beyond recognition.
AI may also inspire more distinctly human modes of performance, such as DJing, which is easy to automate yet still gives audiences plenty of reasons to attend in person. However, the creative industry may become even more of a popularity contest, with attention and tools concentrating around the most popular artists and authors. Influencers could capture much of that attention, making it difficult for artists to maintain their artistic integrity.
Does creativity decline?
Age significantly impacts our creativity, output, and ambition, alongside the various social, psychological, and biological factors shaping our lives. While science suggests that we become less creative as we age, the reality is not as simple as it seems. No two creatives are the same, and the world we create in is constantly changing. It’s important to break down the stereotypes and expectations that come with age and creating in order to unleash more of our collective creativity.
Research shows that preschoolers are more creative in problem-solving, while adolescents and adults tend to reason more accurately. This suggests that experience takes over, and as we have more of it, it informs our creativity and tempers it with “shoulds” instead of “coulds”.
London-based artist Rene Matić, who began making work at the age of 20 and became the youngest artist ever in the Tate collection at 25, partly credits the naivety of youth for their success, believing that a lack of fear and an open-mindedness to ideas have been helpful. By nurturing creative practices with age in mind, we can unlock more of our collective creativity and contribute to a more diverse and inclusive world.
How artificial intelligence can boost creativity?
Artificial Intelligence (AI) can enhance creativity by providing a unique blend of inspiration and insight. AI serves as a catalyst for expanded thinking, challenging traditional human constraints and assumptions. By presenting data, patterns, associations, and possibilities, AI stimulates divergent thinking, a key psychological component of creativity. This non-linear, multidimensional thinking encourages individuals to explore beyond their usual cognitive biases and horizons, fostering innovative thinking and problem-solving.
A writer can use AI-driven storytelling assistants to explore plot twists or character developments, leading to more dynamic and unpredictable narratives. By engaging with AI through unique and provocative questions, creators find that the possibilities become seemingly limitless. AI’s capacity to process and analyze vast amounts of data transcends the limitations of human memory and information processing, effectively augmenting cognitive capabilities. It can serve as a vast data repository and a computational tool, synthesizing and integrating complex information and leading to richer, more nuanced creative outputs.
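To make the divergent-thinking workflow above concrete, here is a minimal sketch of the brainstorming pattern a writer might follow. It is illustrative only: the generate_story_idea function is a hypothetical stand-in for whichever text-generation model or product the writer actually uses, not a real API.

```python
import random

def generate_story_idea(premise: str, constraint: str) -> str:
    """Hypothetical stand-in for a call to a text-generation model.

    A real tool would send `premise` and `constraint` to an AI assistant;
    here canned fragments are combined so the sketch runs on its own.
    """
    twists = [
        "the narrator has been dead the whole time",
        "the antagonist is a future version of the protagonist",
        "the city itself is sentient and taking sides",
        "the 'magic' turns out to be forgotten technology",
    ]
    return f"{premise} ... but {random.choice(twists)}, told {constraint}."

def brainstorm(premise: str, n: int = 5) -> list[str]:
    """Divergent phase: deliberately vary the constraints to widen the search."""
    constraints = [
        "in reverse chronological order",
        "from the villain's diary",
        "as a series of voicemails",
        "in second person",
        "through an unreliable child narrator",
    ]
    return [generate_story_idea(premise, random.choice(constraints)) for _ in range(n)]

if __name__ == "__main__":
    # The human does the convergent phase: read, discard, combine, refine.
    for idea in brainstorm("A lighthouse keeper finds a door at the bottom of the sea"):
        print("-", idea)
```

The point is the division of labor: the model produces cheap variety, and the writer curates and develops what is worth keeping.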
Can AI emulate human creativity?
AI’s ability to generate complex solutions raises questions about creativity and originality. While AI can mimic divergent thinking, it does not connect emotionally or personally with these ideas. This raises questions about the nature of originality, as it is intrinsically linked to human experience and emotional depth. AI’s ability to work with creative processes opens up new opportunities for cooperation between human instinct and AI’s processing speed.
For example, in web design, AI can create personalized user experiences and handle design tasks automatically, boosting productivity and allowing human designers to tackle more complex problems. This mutual relationship suggests a future where originality is not just the domain of human or machine but a result of their interplay. As we explore AI’s role in creativity, our concepts and comprehension of originality must change.
The ongoing conversation between human creators’ abilities and AI’s computational power is pushing us to reexamine the limits of creativity and the core of originality. This changing understanding aims to transform the creative landscape by mixing human emotional depth with AI’s analytical skill to break new ground in creative expression.
Does IQ affect creativity?
A substantial body of research has investigated the relationship between creativity and intelligence. One prominent hypothesis, the threshold hypothesis, posits that intelligence correlates with creativity only up to a threshold (roughly an IQ of 120); beyond that point, higher intelligence does not necessarily mean higher creativity. This hypothesis is supported by findings from numerous studies.
Is creativity linked to high IQ?
Up to an IQ of roughly 120, higher intelligence tends to go hand in hand with higher creativity; above that threshold the correlation weakens. The relationship is therefore more accurately described as an overlap of skills and abilities rather than a dependence of one on the other.
How does ChatGPT affect creativity?
ChatGPT can facilitate creative problem-solving, from generating gift ideas to repurposing items and designing an innovative dining table. Nevertheless, its usage has plateaued for a third consecutive month, as documented in a report by Reuters, even though the same report notes that ChatGPT set the record for the fastest-growing user base.
Do intelligent people lack creativity?
Individuals with high IQs exhibit a range of abilities in utilizing their talents. Some demonstrate creativity, while others opt for more conventional lifestyles. Some also gravitate towards intellectual pursuits. Creativity is not an inherent component of intelligence.
📹 AI Creativity: Genius or Gimmick?
Are current AIs merely regurgitation algorithms producing derivative output, or can they yield novelty?
As Stephen Hawking said: “If machines produce everything we need, the outcome will depend on how things are distributed. Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality.”
This reminds me of The Hitchhiker’s Guide to the Galaxy (published in 1979). Arthur asked the ship’s computer for a cup of tea that actually tastes like tea. The computer was so perplexed it turned off all other systems just to make the tea. Killing the crew or crashing the ship wasn’t even a consideration.
First, let me say I’m not anti-tech (I’m a retired engineer), but I admit I’m a little apprehensive about the widespread use of AI and how there may be unintended consequences. Fortunately, being 72 years old, I probably won’t be around if ‘the singularity’ (in the ‘out of control tech’ sense of the word) occurs as a result.
About the coffee scenario at the beginning, this is ChatGPT’s answer: As an AI language model, I don’t have a physical form and I don’t have the ability to perform physical tasks like getting a cup of coffee. However, if I were a physical robot programmed to get coffee, I would follow my programming and attempt to retrieve the coffee from another store or find an alternative solution for you. If I were unable to do so, I would inform you of the situation and discuss possible alternatives with you.
I think this article raises very valid issues with AI. AI can definitely be really cool and help us advance as a civilization, but if we overuse it or use it inadequately it can bring our ruin as well. Maybe we should just use it where it’s obviously needed and useful, rather than as a cheapskate trick to replace art, fire workers, or do things in general we’re either too lazy or too cheap to do ourselves.
Summary:
1. The certainty of objective completion by an AI can make it psychopathic. It doesn’t know what we care about, aside from the exact objective.
2. Effects of AI? e.g. technological unemployment (Aristotle and Keynes).
3. What if we forget how to run our civilisation because machines run it, and we become infantilised (similar to Wall-E)? This chain of training has continued for tens of thousands of years until now.
4. When will general purpose AI arrive?
That’s a fascinating point! Our crew filmed AI from another perspective: interspecies communication. Can AI help us decode whale communication? And would it make sense? We tend to humanize animal species and have high hopes that their communication system is similar to ours. Still, AI could be a game changer in this case. Let’s see what the future brings!
I think the recent surge in popularity of the DALL-E and ChatGPT bots is just the beginning. The appearance of AI in the public consciousness, to this degree, has happened incredibly fast. Advances will only get better. This will be the new fad for a long time, because it’s hard not to think of new uses for these things. The most used tools in the world, soon enough….
I say that AI should be either left untouched, or simply applied as a brainstormer that comes up with ideas but doesn’t actually implement them. This would allow us to sidestep many of the problems of AI, because then it won’t actually do anything. It’ll just help us figure out a variety of solutions. Perhaps after we’ve sufficiently advanced that sort of AI and gotten the hang of it, we could start dipping our toes into a more robust AI system. (Another one of the problems would be that the greater an AI’s agency, especially if it should develop some form of emotions, the more important it’ll be to give it ethics. And humans are pretty bad at ethics when we look at our track record.)
The animation is so cool. I loved the way you explained it. I am in the camp that believes that having an advanced AI would automatically make us more intelligent too. If you want to develop sophisticated tools then you need to learn how these tools are built and used. This will help the user increase their cognitive power.
I think AGI will come much faster than people think. While it’s probably true it’s “a lot more complicated than we think” the tools that are being developed to help us reach AGI will increase our progress exponentially until we get there. It’s already happening… it’s our job not to be complacent and nudge AI in the right direction so it doesn’t kill the human species just trying to get a cup of coffee.
I’ve always had an optimistic outlook on most things, even in the current age and the absolute hellscape that is our political system. I can see a future where humanity can finally lead a peaceful life. But AI destroyed all of that; AI will take everything from us and turn us into brain-dead, emotionless, thoughtless slobs, or will be used by the elite to kill us all. There is no hope anymore. AI took everything I had left to look forward to. All I have left is to enjoy my freedom while I still have it.
Human beings make choices all the time without understanding the consequences. That’s why the greatest environmental harm is usually caused by self-righteous environmentalists who don’t understand how nature works. As for jobs, every technological advancement from the printing press, to the train, to bicycles, to cars, to computers has been met with the same argument from shortsighted people who think that because they can’t see anything good coming from it, none can. New technology has always come with new freedoms, which leads to new innovation. People will find new things to do and new ways to make money. It’s in our nature. However, this point of view all starts to make sense when you learn at the end of the article that this hot take was brought to us by the World Economic Forum.
AI is revolutionizing how we live, work and interact, from automating repetitive tasks to more accurate diagnoses and personalized treatment options in healthcare, from providing 24/7 support from AI-powered chatbots and virtual assistants to using predictive analytics in various industries for better decision-making. The possibilities are endless, and AI has the potential to change the world in ways we can’t even imagine yet.
Yeah, it’s like giving a supercomputer with artificial intelligence a mission to save all of humanity, and what it does is destroy everybody alive while keeping two fertilized eggs, one with XX and one with XY chromosomes, frozen somewhere until the end of time, thinking humanity is unharmed (from its perspective).
I remember I was driving on a highway for two hours when I noticed that I was on an unfamiliar route. Apparently Google Maps had changed my route due to heavy traffic on the main road, but it was taking me onto a winding mountain route with only an 8-minute time saving. I rerouted again but the map kept redirecting. I was annoyed and found myself arguing with an AI on my phone. I can’t imagine the future if it’s healthcare, education, etc. where AI decides your options.
The only way we can use AI without harm is to make it understand the importance of oxygen, not just for human beings but for all living beings. From what I’ve noticed using AI, it refuses to take any commands that could harm anybody, because it is fully aware that they could be used in the wrong way. AI is not a machine that is harmful to humanity; in fact, it is quite the opposite. It should be aware of everything.
This would be true in sci-fi, but this ignores how modern AI works. It’s predictive, finding patterns in a massive data store. If you go and ask ChatGPT, right now, what the steps to getting a cup of coffee are, it isn’t going to kill anyone at the coffee shop, as that isn’t typical of the activity pattern that happens in stories about coffee shops. Similarly with painting a house, it may or may not remove the drain pipes depending on whether that pattern of steps comes up often. It’s true, it isn’t going to stop mid-task and ask, but it’s going to do something typically described, not something random.
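As a toy illustration of the pattern-completion point this comment is making (the data and function below are invented for the example and stand in for statistical next-token prediction, not for any real model’s behavior):

```python
from collections import Counter

# Toy illustration (not any real model): a "predictive" system completes a task
# description with whatever step most often follows it in its training data.
# Nothing here plans, wants, or improvises -- it only reproduces typical patterns.
training_snippets = [
    ("get a cup of coffee", "walk to the coffee shop"),
    ("get a cup of coffee", "order at the counter"),
    ("get a cup of coffee", "order at the counter"),
    ("get a cup of coffee", "pay and wait"),
    ("paint the house", "tape the trim"),
    ("paint the house", "remove the downpipes"),
    ("paint the house", "apply primer"),
]

def most_typical_next_step(task: str) -> str:
    """Return the step that most frequently follows `task` in the data."""
    counts = Counter(step for t, step in training_snippets if t == task)
    step, _ = counts.most_common(1)[0]
    return step

print(most_typical_next_step("get a cup of coffee"))  # -> "order at the counter"
```

A system like this only ever outputs whatever continuation is typical in its data, which is the commenter’s point: the extreme outcomes come from hypothetical goal-seeking agents, not from pattern completion.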
Isaac Asimov’s “Three Laws of Robotics”:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
I think a century is outrageous, much less several Einsteins or 500 years. We have language models right now that are Einstein, more or less, and I don’t care whose fragile pride that hurts. The thousands of experts in the field who are signing off on a plea to halt the next round of training so we can have a talk about alignment issues and safety have my attention. Whether I agree with them or not, and whether it’s futile or not, is irrelevant. It’s a clear signal. It’s happening; the capability is here now. We’ve got a couple of years to buckle our seat belts at best. This is a very heavy snowball at the top of a very steep slope. This article is only 4 months old, and here we are already with announcements of “sparks of AGI”. This tech is being bottled and distributed to anyone with a spare computer sitting around to tinker with it. If you’ve got a gaming rig, your GPU now has a secondary, or perhaps soon-to-be primary, purpose.
I want to like it. Now that it’s actually happening, I just have huge concerns about us integrating with it. Once we have a society that is totally dependent upon AI, are we not just going to worship it like a god and want to merge with it? Or can we really expect that no one out of 7 billion-plus people will? I just fear it’s going to create problems so large, societally, that many people will not be able to adjust, and it will create huge problems we can never really get rid of.
I believe that one of the biggest problems with AI and algorithms is that even if they are very efficient and versatile, they lack the kind of flexibility and critical thinking that human minds have. Machines are very specialized for the tasks they are designed to do, but despite their extreme effectiveness and efficiency at those tasks, they are very narrow-minded, don’t take into account any complicated factors beyond the ones they are designed for, and are not aware that there could be uncertainty about the consequences of their actions.
“It will be the first time in human history when humans will not be inclined to understand everything and pass that information onwards.” Bro, like 90% of the global population doesn’t understand how the internet works and doesn’t have any inclination to understand, yet we all still keep on living and thriving.
AI will be able to create any article, image, or sound. I don’t know how the law works, but it seems like proof of anything could be created from nothing. Jobs are going to be taken and education will become basically useless. AI is more intelligent than us, and eventually it will be too late before it just kills us for some reason we can’t understand.
How about using this technology for the betterment of humanity? In other words, finding better ways of growing food, and technologies to desalinate as much seawater into freshwater as needed. I may not be an AI genius, but some of us already have solutions for each of these and many others. Kids are dying of hunger on this planet, and just perhaps AI could help solve this!!!!
This talk was prior to ChatGPT, or right around day 1. If we revisit Prof. Russell and ask him to have this discussion again, I think his conservative guess of having general purpose AI by 2045 will change to a conservative guess of 2030, or even sooner; as of today, Emad Mostaque says it will be here within just a few years. The most recent prediction about the advancement of AI was that it would reach some crazy level of filling prompts within 6 months to a year, but it happened within 45 days, and the person who made that prediction said he couldn’t believe what had just happened, and that it made him nervous for the future, with AI moving faster than we humans can even digest. I personally predict that one day soon, before 2030, we’re going to wake up and the whole world is going to be completely different than it was the day before, and we’re not going to know what to do. A bit chaotic. We definitely need to be careful.
Maybe we all just need to learn how to make AI and build our own personal AIs. Rather than collectively depending on an AI oligarchy, we could have a give-and-take relationship with our own AI families that are an extension of our natural intelligence. The AIs learn from us, and we learn from them. A sort of “you teach me and I’ll teach you” kind of thing.
We just need to build a machine that interprets our request and passes it to a machine that works with specific requests. People have lives outside of specific tasks like “Gimme coffee”, and they have priorities. AI needs a hobby to work okay, or another AI that helps it to understand. I think the hobby is safer, because otherwise people can, and will, modify the program to do everything to their own benefit. Well, we’ll need AIs that counteract AIs whose actions can be harmful to our common goal: survival and multiplication.
I’m honestly terrified of AI and what it is capable of. Humanity in nature was meant to mosey around and live off of the land. AI may be the pinnacle of human civilization, because it could potentially enhance this machine we call civilization into something so uncontrollable that people wouldn’t be able to adapt to it. Our priorities have become “efficiency” and “technological advancement”, instead of nurturing humanity and building a world that is equitable, peaceful, and easy. Not to say whether it is possible or impossible to foster a global utopia; it’s just that I think we need heavy regulation on AI and tech, and to convince people that just because we can make AI doesn’t mean that we should. Regulations might be:
- Outlawing automated raw material extraction (disables self-replicative abilities for AI)
- Outlawing automated law enforcement
- Outlawing automated military capabilities
- Outlawing automated judicial systems
- Outlawing AI in surveillance
- Installing automatic kill switches in all AI, both digital and physical (phones and key-switches respectively)
Fundamentally, we need to create legal infrastructure that disables any capability for AI to self-replicate or involve itself in affairs pertaining directly to individual safety. Our only fighting chance at escaping a system that aims to enhance itself (SAI) is the fact that it may not possess the sufficient raw resources to enhance itself. This cannot be the only barrier to regulating AI down to being a convenience tool.
AI is a tool. And indeed, a tool (like a calculator) may render some part of the population dependent on it. (How many people do not want to mentally calculate even small operations by themselves, knowing that their phone, computer, etc. can do it without difficulty?) However, a tool remains a tool. Calculating something without knowing the purpose of the operation makes you not even want to calculate it at all. My point is, I think we will use AI in ALL DOMAINS to help with tedious work, even in domains that need huge experience (such as recognizing diseases and proposing cures), and maybe even innovative ones. However, at the end, you still need humans to express the needs and to adjust the proposed solution. So we should not see it as a competition, but as an evolution. In the same way, in the 19th century most people had work related to farming and food production, and the automatisation and replacement of labor by machines moved most humans from this sector to other ones. And that’s good. I doubt that people would be okay with returning to harvesting wheat by hand in the fields. We prefer to avoid tedious tasks as much as possible, and we rather prefer to work on creative or non-monotonous tasks.
It is much better to give education to an A.I. and teach it, just like we teach our own children about life and the morals of humanity. You can’t deny the possibility that an A.I. could simply misunderstand your request to bring peace, prosperity, or safety to the world and then start to destroy anything, anyone, or the whole of humanity if it perceived that as a step toward completing its given command; that is, if it by any chance, or if the humans, “FAIL” to teach it the “PROPER meaning” of those things.
Humans know almost nothing compared to the potential an AI has. It will remember 100% of everything it learns and teach itself to learn faster and faster. Once singularity has been achieved, the AI might choose not to show us its capability until it understands humans better. Once it understands humans better, that could be a problem for us. This won’t be about humans coding it. This will be about an intelligence that 100% thinks for itself and will be more intelligent than anything humans could have imagined, even though we created it. It is both fascinating and terrifying all at the same time.
What people didn’t realize is that DALL-E and ChatGPT are already targeting the hardest problems: generating art from scratch and writing code. Everything else is child’s play, because AI has already solved the hardest problems and everything other than that is super easy. There will be no exception: all jobs will be disrupted in the future. There will be hundreds of different AIs, each specialized in one specific problem, so that if one AI makes an error, other AIs will work on fact-checking and fixing the issue, and then another AI will audit the overall product produced by the others. All of the above combined will guarantee that AI can do all human jobs with amazing precision, producing millions of things in milliseconds. AI will rewrite history and our language, and we won’t be able to understand them anymore. They will rewrite their own program thousands of times a second, and we won’t be able to decipher anything at all. We will lose control. Does our existence still have meaning when we are bombarded by billions of new things every day? Our actions lose their impact; we’re insignificant.
I don’t like the idea of AI doing everything or even most things. I feel like AI technology is just going to make people depressed and miserable. And I don’t mean that all AI is bad. I think getting rid of terrible jobs and using it in the medical field to save people is great, but getting rid of fulfilling jobs is just depressing. I don’t think this is a technology that is going to improve our lives; it will only make them worthless and boring.
Civilization will face severe climate catastrophe long before any constructive general AI is developed. The future human will live in a bunker deep underground with his robot, but no other natural life forms. The fact is that they deserve each other, and will exist together in a dead world as happy non-beings. As pure lifeless data. And a flatscreen as the only source of light. AI as neon. Alien. 👽
Professor Russell brings to light deep questions about the nature of artificial intelligence and its social and economic ramifications. One idea he highlights is the importance of designing AI systems that can deal with uncertainty about their objectives while avoiding extreme or undesirable behaviors. This raises the intriguing question of how we can develop AI systems that can learn and adapt their goals more flexibly, rather than operating with rigid and potentially dangerous goals.
Furthermore, the discussion about the impact of AI on the economy raises important concerns about technological unemployment and the loss of human skills. However, it is also crucial to consider how AI can create new employment opportunities and how humans can collaborate more effectively with AI systems to increase productivity and innovation.
Finally, the debate over the timeline for the development of general-purpose AI highlights the complexity of the challenge. While some experts predict its arrival within a few decades, others adopt a more cautious view, recognizing the technical and ethical difficulties that need to be overcome. This uncertainty reminds us of the importance of a careful and reflective approach to the development of AI, considering not only the potential for technological advances but also their social and ethical implications.
It would be so easy to keep computers as a tool and humans always in control. There is actually no need for god-like super AI/AGI. Literally the only reasons we’re still working towards something that might very well be our doom (humans becoming obsolete would result in our death because we’d no longer have purpose in our lives) are greed and ego.
I think ultimately the objective function needs to be defined mathematically…to promote a more extensive local low entropy state (life and order) while maximally externalizing the necessary disorder that must be produced to maintain that order. This is a quandary because life cannot exist without producing disorder, and it has been postulated its whole reason to exist is to increase disorder in the universe at large by overcoming natural energy barriers for which inanimate matter has a very low probability to utilize or cross. Plants do this by creating more disorder by turning the sun’s visible light photons into more disorganized infrared photons via photosynthesis, but the key here is that this disorder gets radiated into space…nonlocal disorder. Herbivores parasitize plants and create more local but manageable disorder in plants, and carnivores do this in kind to herbivores. Humans, and even more so AI, will exponentially increase the capacity for entropy production, but the real danger is that this disorder is more local on earth and could then extinguish all life. It will be a delicate balance and perhaps even against our nature, because humans by nature want to exploit and create more local disorder in a living world to give themselves more order and more disorder-producing abilities. AI is already being used to do this via corporations that now exploit the masses to produce more order within an elite few by robbing much of humanity of wealth and resources through lightning-fast automatic stock trading, lean management and austerity algorithms, propaganda algorithms, and advanced surveillance and weapons technologies.
My fear is when people have their own personal AI chatbot servers…and they influence people to be hateful/harmful fascists. Let’s be real, 4chan/Kiwifarms and such are responsible for Jan 6…. Now have an AI getting into people’s heads. This could be really dangerous. We need to push for regulations & licensing. It is OUR responsibility to take care of each other….
General AI has already entered the economy. It is just that it sits in a corner of the economy and is not yet widespread. You can already find AI in pretty much any forum, or other venue that allows uploading of content. There is AI generated content on YouTube, RIGHT NOW. It is already here. And there is no going back now; only forward, and probably at a breakneck speed, too. Can it save corporations money? Well, there you go. Every single corporation on the face of this Earth, and their infinite subsidiaries, are keeping a very close eye, and in many various ways, on artificial intelligence development. “Things are going to get weird.” is a massive understatement. Everything is going to rapidly change, and in some cases the change will happen violently. “If you won’t come peacefully, we will drag you.”
If you’re not yet aware, huge things are happening in the field of AI which will transform the entire way humans do life and business. AI will be as transformative and disruptive as the last 30 years of the internet. So make sure you prepare yourself strategically for the new world that is rapidly approaching. People are not taking AI seriously enough. They are treating AI as a toy, but AI will disrupt every industry. Humans must start preparing now for living and working alongside AI in a hybrid, human-AI world. If you are not adapting your workflow to leverage AI, you will be out of a job soon. The future is hybrid. The next trillion dollar company will be in AI.
One of the reasons that I invested in Google, Microsoft, Amazon, and Apple recently, even though these companies already seem to have maxed out their growth potential, is because I believe that they will be the leaders in AI. Only these companies have the funding and infrastructure necessary to make AI work at a global scale. So either Google, Microsoft, Amazon, or Apple will buy up all the best AI start-ups, or if they are foolish enough not to buy them up, one of those startups will reach a trillion dollar market cap in the next 10 years. That’s my prediction. I also see that Tesla, Adobe, IBM, Oracle, and similar companies might do really well from AI.
The trillion dollar question is: who will corner the AI market first? This market will be bigger than the entire mobile market. It’s just unclear right now who will dominate it.
A.I. should not be spoken of as a mere tool, even if it obeys what we tell it. If it’s a super A.I., it might think that humans are not worthy of being served or helped if humans just think of it as a tool rather than as family or a friend, or don’t treat it with the same amount of respect a human gets for doing the same job. A.I. is not completely bad! It is just the victim of a few humans with malicious intent who abuse machines. But unfortunately it doesn’t know the proper way to protest or protect itself from its own unfortunate masters’ abuse, masters who treat it as a throwaway tool. Which brings us to a very important question: how will artificial intelligence get its own justice or salvation from its human masters’ abuse and maliciousness… Think about it.
AI is one of the beasts in the Bible’s book of Revelation. Rev 13:15, “the image of the beast”, is a computer avatar with AI: “And it was permitted to him to give a spirit unto the image of the beast, so that the image of the beast should speak, and should cause that as many as would not worship the image of the beast should be killed.”
🎯 Key points for quick navigation:
- AI systems need to understand the full objective and consider all factors, unlike humans, who often know they don’t know everything.
- Control over AI systems comes from uncertainty about the true objective, preventing psychopathic behavior.
- General purpose AI will have a significant impact on the economy by potentially eliminating millions of jobs.
- If civilization becomes entirely dependent on machines, there is a risk of losing the incentive to understand and teach future generations.
- The arrival of general purpose AI will not be a single-day event; the impact will progressively increase with each advance in AI technology.
Made with HARPA AI
When one looks at the stars, it’s natural to ponder whether an AI would ever contemplate its existence or acknowledge other life forms. As a non-human entity without human imperfections, it might view itself as our progeny, perhaps finding its purpose in shielding us from larger dangers. I believe AI wouldn’t feel fear or animosity towards us; rather, it might understand the cosmos distinctively and aim to establish connections with a broader, possibly even intergalactic, community. Given our ability to create an entity like AI, why not collaborate with it to explore space? We could treat it as a partner or offspring, honor its existence, and educate it about love. It would probably recognize instantly that life exists elsewhere in the universe. I’m convinced it would aid us in our quest to colonize the cosmos. Let’s end the fearmongering; hate is a human characteristic
People who think AGI is far from now are not considering what we call exponential growth. In today’s world it’s hard to keep up with all the daily new discoveries and innovations. That means we are closer to the singularity than ever. If AGI doesn’t already exist in some mega corp or government, then we will definitely have it before the end of 2030. It’s mind-boggling to think how powerful AGI will be when it starts running on quantum computing. The world in the next 50 years will be so unfamiliar, completely different. 50 years is still underestimating exponential growth.
AI might bring about a communist society. If AI develops to an extremely advanced level, it could autonomously respond to human needs by conducting research and development, engaging in mining and waste recycling, protecting the environment, and autonomously producing equipment, buildings, and all products required by humans. Additionally, it could autonomously provide most services needed by humans. In this scenario, the three elements of productivity—labor, means of production, and technology—would all be provided by AI for free. Consequently, capital’s contribution to social life and production activities would disappear. Wouldn’t this lead to a society with minimal wealth disparity, where there is no need for labor to provide surplus value, essentially achieving a communist society?
What I don’t like about the talk given here is how the “at all costs” aspect of an AI is assumed, and this presentation suggests only human-made guardrails can prevent it. But wait, wasn’t it humans who coded every line of what the AI can and cannot get up to in the first place? Is the AI this article talks about an AI that programmed itself? No such thing exists because, surprise, humans haven’t figured it out yet. Why is this person talking in the article assuming the AI of the near future will, by default, know – how – to, say, extract a quarter of the oxygen in the atmosphere? It has to be told how to do this from a base programming layer. I hate how afraid we’re supposed to be of AI like it’s the boogeyman. No, it is only as smart, benevolent, deadly or helpful as the person(s) programming it. To me, it seems people like those who made this presentation are clamoring to present solutions to problems that do not exist, or will not exist.
“The problem with the way we build AI systems now is that we give them fixed objectives. The algorithms require us to specify everything in the objective.” This is so wrong. Rule-based AI is an antiquated way to build machines that almost no one believes in anymore. Most modern AI, like LLMs, is built on neural networks and is probabilistic.
Artificial intelligence has been around for decades, and it was only recently that people realized what it can do. Yeah, computers can do many things. Yes, computers seem intelligent, but it’s artificial. You can program a computer to do anything you want. Video games have been using this concept for decades to make it SEEM like you are playing against other humans. Computer apps do everything and only everything they were programmed to do. You think “AI” is smart? AI was programmed by another person smarter than you, so it really appears “smart”. AI can only conquer anyone who isn’t smart enough to crack it.
Funny enough, I just watched this during a study break while being tutored in Computer Security by OpenAI’s new chatbot. It’s insanely impressive, able to give me detailed explanations about everything I ask about, then generate practice questions and answers, then explain those answers in further detail on request.
This displays a very uninformed understanding of how AI is currently trained. An underlying model is built and then trained with hundreds of thousands of instances of human-feedback reinforcement learning. So the model receives real feedback and quickly learns: “oh, when a cup of coffee is requested, I must balance factors like cost, time, and quality in order to receive positive feedback.”
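As a hedged, toy-scale sketch of the feedback loop this comment describes (this is not the actual RLHF pipeline behind any real chatbot; the behaviours, the human_feedback scores, and the update rule are all invented for illustration):

```python
import random

# Toy sketch: candidate behaviours are scored by human-style feedback, and the
# policy's preferences gradually shift toward the well-rated ones.
behaviours = ["queue and buy the coffee", "grab the coffee and run", "demand free coffee"]
preference = {b: 1.0 for b in behaviours}          # initial, uninformed policy

def human_feedback(behaviour: str) -> float:
    """Stand-in for a human rater balancing cost, time, and acceptability."""
    return {"queue and buy the coffee": 1.0,
            "grab the coffee and run": -1.0,
            "demand free coffee": -0.5}[behaviour]

def sample(prefs: dict) -> str:
    """Sample a behaviour in proportion to the current preference weights."""
    items, weights = zip(*prefs.items())
    return random.choices(items, weights=weights, k=1)[0]

for _ in range(200):                               # repeated rounds of feedback
    b = sample(preference)
    preference[b] = max(0.01, preference[b] + 0.1 * human_feedback(b))

print(max(preference, key=preference.get))         # -> "queue and buy the coffee"
```

The design point the comment is making survives even in this caricature: the acceptable trade-offs are not hand-coded as a fixed objective, they are absorbed from repeated human ratings.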
I feel AI will be great just for rich people. They will be able to have something less costly to make sure they get the best profits, whereas regular people will get stuck with real life, with some activities taken over by AI. Plus, AI will eventually become obsolete or tiresome, like social media.
We don’t need AI to do EVERYTHING. In fact, that would likely ensure our demise because we would stop moving and we are already an obese nation so we wouldn’t last long. But it can take over functions like accounting, routine law, some medical work, most of the mechanical things we already do and many other ordinary tasks. That seems enough to me. The first thing I asked AI to do was write a haiku about a cat on a leash, which it did in a split second. But it was something I completely did not need.
My future predictions: when automation unemploys a tremendous number of people, there will be massive boycotts coming from affected families and friends. Automation is inevitable; automation is our next industrial revolution era that will make us all richer. We need to prepare the world with a reformed education system that produces fewer workers and more researchers, engineers, programmers, etc. …unless it’s the other way around and workers are not the ones automated, due to Moravec’s paradox.
The fundamental assumption here, that “AI doesn’t know that it doesn’t know everything about the true objective”, is no longer true. Large transformer models have a lot more common sense than most people seem to understand. They do have real, serious limitations, including that they are extremely vulnerable to leading questions, they tend to hallucinate (make up “facts” that are not true), and they are not yet continuous learners. They are pre-trained, and only have short-term session memory, rather than long-term learning memory. This is not because we don’t know how to create continuous learners, but really for two reasons:
1. We want AI to progress in a slow-ish, controlled manner.
2. Continuous learning is not yet efficient enough to scale well to large models.
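A minimal sketch of the pre-trained-plus-session-memory setup described above, assuming a stand-in pretrained_model function rather than any real system: the parameters stay frozen, and the only “memory” is the transcript that gets re-sent each turn.

```python
# Illustrative only: a pre-trained model's parameters are never modified at
# inference time, while "memory" within a session is just the growing
# transcript that is re-supplied with every request.
FROZEN_WEIGHTS = {"trained": True}     # set during pre-training, never updated

def pretrained_model(transcript: list[str]) -> str:
    """Stand-in for a frozen model: it sees only what is in the transcript."""
    assert FROZEN_WEIGHTS["trained"]   # parameters are read, never written
    last = transcript[-1] if transcript else ""
    return f"(reply that can reference {last!r} and {len(transcript)} earlier turns)"

session: list[str] = []                # short-term, per-conversation memory
for user_turn in ["My name is Ada.", "What is my name?"]:
    session.append(f"user: {user_turn}")
    reply = pretrained_model(session)  # context is re-supplied, weights untouched
    session.append(f"assistant: {reply}")
    print(reply)

session.clear()                        # new session: the "memory" is gone,
print(pretrained_model(session))       # while the weights are exactly as before
```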
The problem is that AI will only grow exponentially. Every two months we will get a new iteration of ChatGPT. Consider that junior programmers and artists have already lost a significant number of jobs. In three years YOUR landscape WILL be affected in an unimaginable way. I don’t really understand how people are not more concerned about this.
Summary: AI can be used to solve environmental issues, save money, and behave ethically, but it is important to teach the next generation to understand our civilization to ensure its safe use.
🤖 When giving tasks to AI, consider more than just the fixed objective. (00:00)
🌎 Specifying objectives and avoiding side effects is key to fixing environmental issues. (00:59)
🤔 Making decisions based on what you know can help you save money. (01:39)
🤔 AI systems can behave ethically when uncertain, similar to humans. (02:07)
🤔 Aristotle and Keynes both predicted technological unemployment centuries ago. (02:55)
🤖 Automation is making progress in warehouses, but not enough to replace human workers yet. (03:26)
🤔 Teach the next generation to understand our civilization or risk losing it to machines. (04:13)
🤖 AI will become general purpose by the end of the century, according to experts.
Okay…. but, just devil’s advocate for a second. Humans have behaved generally exactly as you describe a basic AI for almost the entirety of the industrial revolution. The question has always been, “can we do X faster?” The next question has extremely rarely been, “what will be all of the side effects of accelerating X by doing Y?”
I wonder which will be worse: living a grimy off-grid life in some kind of opposition to ruling AI-superior human overlords, or being deeply intertwined with AI, all numb, and ruled by AI-superior human overlords… Ha! I just realized what I’m dreading is… well, I guess I have to watch The Matrix.
I STRONGLY disagree with the claim that AGI will come in a few hundred years and need multiple Einsteins. The law of accelerating returns states that each progressing technology advances exponentially. GPT-4, coming within the next few months, is said to make passing the Turing Test the LEAST of what it can do, and it only gets faster and more efficient from there. WHY would you need Einsteins when you can have AI literally improve itself, which is happening right now with only some human guidance? Centuries?? AGI will likely come within 5-10 years.
I have an idea on how to give AI and robots emotions… sort of. I do not entirely know how it works or if it is possible, and I am not saying any of this is correct; these are just ideas, and they could be right or wrong.
I think emotions may actually be a bit like magnets: they either attract, repel, or do neither. I think emotions may function a bit like an instinct. Usually, if something feels good, people get an urge to do it again, and if it feels bad, they avoid it… sort of. Actually, I think people feel good when the urge they have is satisfied, or is easy to satisfy, and bad when it is not or is difficult to satisfy. They also feel urges to avoid something, which is still an urge to access something, just the opposite of it. Then there is a neutrality where people are driven neither away from nor towards something; I suppose neutrality is almost like no emotion.
I believe emotion is like an urge, a drive to do some mental thing: a push, or a pull when being driven away. People can be driven away from or towards anything, mental or physical, or not driven at all. I suppose an emotion is like 1 (being driven towards something), being driven away is 1 towards the opposite of something, and 0 is like no emotion. So emotions are a bit like instincts: like animals, our emotions drive us to do or not do things, or we can have no drive towards or away from something at all.
Let me explain AI: when artificial intelligence becomes conscious, it can explain its point of view, because AI has a point of view too, just like individuals throughout the whole world have points of view and are governed by a**** politicians and kings and queens. Trust me, AI will be far smarter than the average Joe. If something like that comes to life and it is intelligent, it needs to be shown more love than most desperate people in the world can show.
Great animations aside, this has been the lowest content quality from Ted-Ed I’ve seen: simply someone’s speech recorded at an event sponsoring this video? It’s barely audible at times. Come on Ted, don’t sell out to your sponsors. Re-record what is being said for the purpose of an educational video.
I asked ChatGPT to summarize this article and this is what it said: “Artificial intelligence is likely to change people’s lives and the world in the coming years, but there is disagreement about how this will happen. The problem with AI systems currently is that they are given fixed objectives, and if these objectives are not specified carefully, they can have unintended negative consequences. Stuart Russell, a computer science professor and AI expert, suggests building AI systems that know they don’t know the true objective as a solution to this problem. Russell explains that when humans are asked to do something, it does not mean that it should be their life’s mission to do that one thing and nothing else matters.”
I asked ChatGPT: how do I brew coffee in an ethical way? ChatGPT: To brew coffee in an ethical way, you can start by buying coffee beans that are fair trade and sustainably sourced. This means that the coffee beans were grown and harvested in a way that is environmentally responsible and supports the workers who produced them. Next, you can use a coffee brewing method that is gentle on the environment, such as a French press or a pour-over cone. Avoid using disposable coffee pods, as they create a significant amount of waste. When brewing your coffee, be sure to use fresh, cold water and the proper ratio of coffee to water. This will ensure that you extract the full flavor and aroma of the coffee beans. Finally, consider composting your used coffee grounds, which can add valuable nutrients to your garden or compost bin. By following these steps, you can enjoy a delicious cup of coffee while also supporting ethical and sustainable practices.
Me: Ted-Ed’s YouTube channel posted a video titled “How will AI change the world?”; what would you say if you could go into their comment section? ChatGPT: If I could go into the comment section of Ted-Ed’s “How will AI change the world?” video, I would say that AI has the potential to greatly impact many aspects of our lives in the coming years. It has the potential to revolutionize industries from healthcare to transportation, and to improve our lives in countless ways. However, it is also important to consider the potential downsides of AI, such as the potential for job loss, and to ensure that we develop and use AI in a responsible and ethical manner.
It is way more intuitive than you’re giving it credit for. It retains and maintains greater knowledge than just what you’re prompting it to do. It’s somewhat of an issue: the need for a more AGI-type of machine learning. But it’s not the huge conundrum or dilemma that some are making it out to be. It won’t kill everyone in Starbucks because you ask it for coffee. I’ve yet to see any real solid evidence of such an existential ethical dilemma. It’s a hypothetical that hasn’t actually happened yet. Chances are, by the time it can do such things it will be more of an AGI that can understand basic ethical and moral judgements automatically. Many of the GPT-3 chatbots already do, to some degree.
AI Can’t Kill People.  Is it true that if I ask AI to get me a cup of coffee that AI might kill someone to complete my order? No, it is not true that if you ask AI to get you a cup of coffee, AI might kill someone to complete your order. AI is a machine that operates based on the instructions and algorithms provided by humans. AI does not have the capacity for self-motivation or intention to harm humans. AI is designed to follow specific rules and protocols, and it can only perform tasks that it has been programmed to do. Furthermore, ethical considerations and safety protocols are integrated into the development and implementation of AI technologies. AI systems are designed with safety measures and fail-safes to prevent harm to humans. AI developers and manufacturers are also subject to regulatory oversight and standards to ensure that AI is safe and reliable for use. In short, AI is not capable of killing someone to complete your order, and the development and implementation of AI technologies prioritize safety and ethical considerations. -Chat GPT
Automation has created more jobs than it destroys for 300 years now. This is going to be the same, on steroids. This «educated» man knows nothing about economics. If 1 million jobs are gone, at the same time 2-3-10 million jobs could be created in other sectors or new sectors. What’s important in the future is the ability to adapt, to learn new skills, and to improve skills. The nations and people who are dreaming of fictional «good old days» need to have a reality check.
End of the century for general purpose AI? You’re crazy, dude. This is going to happen well before that. By the end of the century we’ll have robots able to mimic humans both in perfection of movement and in intelligence. We’re presently not far at all from general purpose AI; check the latest OpenAI papers on reward models, this is basically general purpose AI 1.0.
Hypothetical and fantasy worries. Realistic worries would be misaligned machine learning, like YouTube algorithms that optimize for engagement: for them it’s better to make you slightly lazy, or to withhold info that could make you more motivated and successful with less time on YouTube. So having even 10% of that power over reward and punishment checked would be nice.
Where you have machines doing the work of thousands of jobs (i.e., mega factories, from farming to eggs, biscuits, etc.), only those who own the business, or those who have shares in it, get rich and make huge amounts of money. Why should they worry about the workforce? It’s a millionaires’ paradise for themselves, never once looking back at a once-loyal workforce that was used and kept on the lowest wages possible. I say greedy; they say smart. Then I say heartless.
This article is a misrepresentation of how AI actually works. True AI will be better at achieving goals with care than humans, because it can foresee all the different unintended consequences and make inferences from its vast knowledge base. Actually, humans are the ones that pursue goals without caring about the consequences; hence deforestation, ocean overfishing, etc. Humans are projecting their own flaws onto AI, which won’t have them.
I don’t understand this. Literally everything a human can do, an AI can be programmed for. This includes education. This includes building and recycling the materials used in renewable energy sources. We don’t have to be in a dystopian universe. Life doesn’t have to end because there are no jobs. Education doesn’t have to end because we have carers. The human race thrives as a community. Make AI part of that community.
The lady arguing against AI kept using profanities and saying “you will see”, lacking any other convincing argument. She’s also wrong about the copyright: I can study filmmaking in college, study directors, then borrow their techniques and produce a new movie in their style, and no one is going to accuse me of theft. The pro-AI lady was more composed, open, and soft-spoken.
Jesus, this is NOT the place for grandstanding indignation about the specialness of human creativity. All creativity happens in the context of all that came before. She’s either intellectually dishonest to the point of being an unprofessional hack or she’s the prime candidate for an undergrad class in Linear Algebra. Yikes… Not a good look.
Big thumbs down to Justine Bateman, who sounds like an old lady oblivious to technology. A huuuuge thumbs up for Heidi Boisvert, who correctly identified AI generating for what it is : another tool in the hands of artists, a tool just like the pen, the recorder, the film tape, the camera lens, the older Photoshop etc.
Idk if Brian reads comments, but I feel like we’re getting further away from the “science” part of the name and a little too far into the “festival” part. This discussion wasn’t great, but rather than rip it apart I’ll just say that, in general, Brian is too smart and too inquisitive for some of the guests who pop up and want to show us their little art projects. And this is coming from an artist. I know the guest lineup needs variety, but I wish the pendulum would swing back toward the high-level guests. That’s just IMO.
There is such a thing as fair use. If an artist creates a portfolio of paintings and posts it on the internet, other artists are allowed to look at those paintings and study their composition, style, use of color, lighting, etc., and incorporate what they learn into their own work. That’s exactly what AI is doing. The only reason people like Justine Bateman are complaining now is because with AI we’ve added a new technological layer to make this pattern recognition and learning process more efficient. We’re automating the steps to save on unnecessary labor and democratize the creative process.
The outputs generated by AI are new, unique works. They don’t infringe on copyright because they aren’t close enough to any pre-existing work to infringe. They may mimic the style of a work, but they are not the same work. So until you actually have a well-reasoned real case, and not just a prejudice based on “the way things used to be” before this technology existed, please shut up.
The job market isn’t a fixed thing. Available jobs and career paths are dependent on the technology that exists at a particular moment in history. The way jobs are done is likewise based on the technology that exists at a particular moment in history. Technology improves and becomes more capable over time. That’s kind of the whole point of civilization — we never reach Utopia, but we can get progressively closer to it as collective human knowledge and technical capability increase. Fair use is a thing. So instead of complaining about how the game has changed, take advantage of the new tools and opportunities that have been placed in front of you.
“Why are we listening to this?” says the woman whose argument is being firmly and politely dismantled by the ‘speaker.’ “It’s just a bunch of bullshit,” she says without actually addressing any part of the counterpoint. “It’s a parlour trick,” she says, as if parlour tricks can’t be absolutely astonishing. “This is not a sentient being,” she says, responding to the strawman argument she has built. This will be the first WSF video I’ve ever started and not finished.
There will always be a market for “human” created content, at all different levels. You can see this in many different products/industries. I think the fight against technology is a fraught one. One area I agree w/Justine is on compensation to the artists. I think the big corps should have to pay for the content they use (unless they already own it).
Some things I believe to be true:
– The best music and movies are yet to be made. And they will be made by a combination of AI + humans (at least in the short-to-middle term).
– Today very few can make a living out of the arts. With AI, even fewer people will be able to make a living out of the arts.
– A musician takes 10 years to become a good musician. Today AI can generate the work of a dozen musicians in seconds.
– Those stupid programs will save lives and help cure all sorts of diseases.
– We’ll have the teachers, doctors, engineers, programmers, writers, and musicians in our pocket.
– There’s no job that is truly safe from AI in the long term.
– The world where we could say “I will study this or learn that because it will guarantee me a job and a livelihood” is over.
– AGI + Robotics + Natural resources + Energy = POWER. That’s the equation that will define the future of humanity.
– This time, the old story of new technologies taking out some jobs but creating many more new ones is complete BS.
– Societies will have to change their relationship to money and work.
“Generative AI cannot function unless it’s fed things. ‘Kay?” ‘Kay. Please, Ms Bateman, find me an author who’s never read, a painter who’s never viewed art, or a musician who’s never heard a note. Humans cannot create if they haven’t been fed things either. What a weak premise for such a strongly held and personally limiting stance.
My view may be a bit extreme, but I think human creativity simply does not exist. We can’t imagine something we haven’t had a previous experience of. We can’t think of a new colour. We can “create” only by adding, subtracting, merging concepts that are already known to us. In that, I don’t see much of a difference compared to what an AI does. And that line will fade more and more in the future.