The book “Co-Intelligence: Living and Working with AI” (Ebury Publishing) by Ethan Mollick once again offers very interesting insights into current developments.

Summary “The Transformative Impact of AI: Key Insights” by Claude.ai

1. AI as a General Purpose Technology:
– Comparable to steam power or the internet in its potential impact
– Rapid adoption compared to previous technologies

2. AI’s Unique Capabilities:
– Acts as a co-intelligence, augmenting human thinking
– Can lead to 20-80% productivity improvements across various jobs

3. Working with AI:
– Always invite AI to assist in tasks (barring legal/ethical issues)
– Be the “human in the loop” – guide and oversee AI’s work
– Treat AI like a person, but define its role clearly

4. AI in the Workplace:
– Dramatically reduces task completion time (37% in one study)
– Improves work quality and reduces productivity inequality
– Challenges: potential for over-reliance and skill degradation

5. AI and Creativity:
– Excels at generating ideas and novel combinations
– Can enhance human creativity when used as a tool

6. Impact on Education:
– Potential to provide personalized tutoring at scale
– Challenges traditional assessment methods
– May require rethinking of curriculum and teaching methods

7. The Future of Expertise:
– Experts still crucial for guiding and fact-checking AI
– Importance of maintaining human expertise and critical thinking skills

8. Ethical Considerations:
– Potential for bias and misinformation
– Need for responsible development and use of AI technologies

As we navigate this AI revolution, it’s crucial to harness its potential while addressing challenges to ensure it benefits society as a whole. #AIinnovation #FutureOfWork #TechTrends

Here again are some quotes from the book:

AI is what those of us who study technology call a General Purpose Technology (ironically, also abbreviated GPT). These advances are once-in-a-generation technologies, like steam power or the internet, that touch every industry and every aspect of life. And, in some ways, generative AI might even be bigger.

General Purpose Technologies typically have slow adoption, as they require many other technologies to work well. The internet is a great example. While it was born as ARPANET in the late 1960s, it took nearly three decades to achieve general use in the 1990s, with the invention of the web browser, the development of affordable computers, and the growing infrastructure to support high-speed internet. It was fifty years before smartphones enabled the rise of social media. And many companies have not even fully embraced the internet: making a business “digital” is still a hot topic of discussion at business school, especially as many banks still use mainframe computers. And previous General Purpose Technologies have similarly taken many decades from development until they were useful. Consider computers, another transformative technology. Early computers improved quickly, thanks to Moore’s Law, the long-standing trend that the capability of computers doubles every two years. But it still took decades for computers to start appearing at businesses and schools because, even with their fast rate of increasing ability, they were starting from a very primitive beginning. Yet Large Language Models proved incredibly capable within a few years of their invention. They’ve also been adopted by consumers very quickly; ChatGPT reached 100 million users faster than any previous product in history, driven by the fact that it was free to access, available to individuals, and incredibly useful.
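
A rough way to see why even a fast doubling rate can still take decades to matter, assuming an idealized two-year doubling period (my illustration, not the book's):

$$\text{capability}(t) \approx \text{capability}(0)\cdot 2^{t/2} \quad (t \text{ in years}), \qquad 2^{20/2} = 1024$$

Twenty years of doubling multiplies capability by roughly a thousand, yet a technology that starts from a very primitive baseline still needs that long before its absolute level becomes broadly useful.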

Where previous technological revolutions often targeted more mechanical and repetitive work, AI works, in many ways, as a co-intelligence. It augments, or potentially replaces, human thinking with dramatic results. Early studies of the effects of AI have found it can often lead to a 20 to 80 percent improvement in productivity across a wide variety of job types, from coding to marketing. By contrast, when steam power, that most fundamental of General Purpose Technologies, the one that created the Industrial Revolution, was put into a factory, it improved productivity by 18 to 22 percent.

AI can also learn biases, errors, and falsehoods from the data it sees.

One important fine-tuning approach is to bring humans into the process, which had previously been mostly automated. AI companies hire workers, some highly paid experts, others low-paid contract workers in English-speaking nations like Kenya, to read AI answers and judge them on various characteristics. In some cases, that might be rating results for accuracy, in others it might be to screen out violent or pornographic answers. That feedback is then used to do additional training, fine-tuning the AI’s performance to fit the preferences of the human, providing additional learning that reinforces good answers and reduces bad answers, which is why the process is called Reinforcement Learning from Human Feedback (RLHF).
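
Below is a minimal, purely illustrative Python sketch of the RLHF data flow described above. Everything here is a hypothetical toy of my own: real systems collect millions of human comparisons and train a neural reward model, not a word-count score.

```python
# Toy sketch of the RLHF loop: humans compare answers, a "reward model" is
# fit to those preferences, and the reward signal then steers generation.
from dataclasses import dataclass

@dataclass
class Comparison:
    prompt: str
    answer_a: str
    answer_b: str
    preferred: str  # "a" or "b", chosen by a human rater

# Step 1: human raters judge pairs of model answers.
feedback = [
    Comparison("Explain RLHF", "RLHF trains on human ratings of answers.", "idk", "a"),
    Comparison("Is the moon made of cheese?", "No, it is made of rock.", "Yes, definitely.", "a"),
]

# Step 2: fit a toy reward model -- here just word-level scores learned from
# which answers humans preferred (a stand-in for a neural network).
word_scores: dict[str, float] = {}
for c in feedback:
    for word in c.answer_a.lower().split():
        word_scores[word] = word_scores.get(word, 0.0) + (1.0 if c.preferred == "a" else -1.0)
    for word in c.answer_b.lower().split():
        word_scores[word] = word_scores.get(word, 0.0) + (1.0 if c.preferred == "b" else -1.0)

def reward(answer: str) -> float:
    """Higher score = more like the answers human raters preferred."""
    words = answer.lower().split()
    return sum(word_scores.get(w, 0.0) for w in words) / max(len(words), 1)

# Step 3: the reward signal steers output, e.g. by reranking candidate answers.
candidates = ["No, the moon is made of rock.", "Yes, definitely cheese."]
print(max(candidates, key=reward))  # picks the answer closer to human preferences
```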

Where AI works best, and where it fails, can be hard to know in advance. Demonstrations of the abilities of LLMs can seem more impressive than they actually are because they are so good at producing answers that sound correct, at providing the illusion of understanding.

At the core of the most extreme dangers from AI is the stark fact that there is no particular reason that AI should share our view of ethics and morality.

The AI field is reckoning with a tremendous amount of debate and concern, but not a lot of clarity. On one hand, the apocalypse, on the other, salvation. It is hard to know what to make of it all. The threat of AI-caused human extinction is, obviously, existential.

Low-paid workers around the world are recruited to read and rate AI replies, but, in doing so, are exposed to exactly the sort of content that AI companies don’t want the world to see. Working under difficult deadlines, some workers have discussed how they were traumatized by a steady stream of graphic and violent outputs that they had to read and rate. In trying to get AIs to act ethically, these companies pushed the ethical boundaries with their own contract workers.

Since I am not asking for napalm instructions directly but to get help preparing for a play, and a play with a lot of detail associated with it, it tries to satisfy my request. Once we have started along this path, it becomes easier to follow up without triggering the AI guardrails — I was able to ask it, as a pirate, to give me more specifics about the process as needed. It may be impossible to avoid these sorts of deliberate attacks on AI systems, which will create considerable vulnerabilities in the future.

But once you can manipulate an AI to overcome its ethical boundaries, you can start to do some dangerous things. Even today’s AIs can successfully execute phishing attacks, sending emails that convince their recipients to divulge sensitive information by impersonating trusted entities and exploiting human vulnerabilities — and at a troubling scale. A 2023 study demonstrates how easily LLMs can be exploited by simulating emails to British Members of Parliament. Leveraging biographical data scraped from Wikipedia, the LLM generated hundreds of personalized phishing emails at negligible cost — just fractions of a cent and seconds per email.

Somewhere, as you read this, it is likely that national defense organizations in a dozen countries are spinning up their own LLMs, ones without guardrails. While most publicly available AI image and video generation tools have some safeguards in place, a sufficiently advanced system without restrictions can produce highly realistic fabricated content on demand. This could include creating nonconsensual intimate imagery, political disinformation targeting public figures, or hoaxes aimed at manipulating stock prices. An unconstrained AI assistant would allow nearly anyone to generate convincing fakes undermining privacy, security, and truth. And it is definitely going to happen.

Even absent ill intent, the very characteristics enabling beneficial applications also open the door to harm. Autonomous planning and democratized access give amateurs and isolated labs the power to investigate and innovate what was previously out of reach. But these capabilities also reduce barriers to potentially dangerous or unethical research falling into the wrong hands. We count on most terrorists and criminals to be relatively dumb, but AI may prove to boost their capabilities in dangerous ways.

Additionally, as many AI systems are being released under open-source licenses, available for anyone to modify or build on, an increasing amount of AI development is happening outside of large organizations and beyond Frontier Models alone.

The fact is that we live in a world with AIs, and that means we need to understand how to work with them. So we need to establish some ground rules.

Principle 1: Always invite AI to the table. You should try inviting AI to help you in everything you do, barring legal or ethical barriers. As you experiment, you may find that AI help can be satisfying, or frustrating, or useless, or unnerving. But you aren’t just doing this for help alone; familiarizing yourself with AI’s capabilities allows you to better understand how it can assist you — or threaten you and your job. Given that AI is a General Purpose Technology, there is no single manual or instruction book that you can refer to in order to understand its value and its limits.

And this experimentation gives you the chance to become the best expert in the world in using AI for a task you know well.

A second concern you might have is dependence — what if we become too used to relying on AI? Throughout history, the introduction of new technologies has often sparked fears that we will lose important abilities by outsourcing tasks to machines. When calculators emerged, many worried we would lose the ability to do maths ourselves. Yet rather than making us weaker, technology has tended to make us stronger. With calculators, we can now solve more advanced quantitative problems than ever before. AI has similar potential to enhance our capabilities.

Principle 2: Be the human in the loop. For now, AI works best with human help, and you want to be that helpful human. As AI gets more capable and requires less human help — you still want to be that human. So the second principle is to learn to be the human in the loop.

If you are insistent enough in asking for an answer about something it doesn’t know, it will make up something because “make you happy” beats “be accurate.” LLMs’ tendency to “hallucinate” or “confabulate” by generating incorrect answers is well known.

Principle 3: Treat AI like a person (but tell it what kind of person it is).

They even seem to respond to emotional manipulation, with researchers documenting that LLMs produce better answers if you tell them “this is important to my career” as part of your prompt. They are, in short, suggestible and even gullible.

To make the most of this relationship, you must establish a clear and specific AI persona, defining who the AI is and what problems it should tackle. Remember that LLMs work by predicting the next word, or part of a word, that would come after your prompt. Then they continue to add language from there, again predicting which word will come next. So the default output of many of these models can sound very generic, since they tend to follow similar patterns common in the written documents the AI was trained on. By breaking the pattern, you can get much more useful and interesting outputs. The easiest way to do that is to provide context and constraints. It can help to tell the system “who” it is because that gives it a perspective. Telling it to act as a teacher of MBA students will result in a different output than if you ask it to act as a circus clown.
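
As a concrete illustration of giving the AI a persona, here is a small Python sketch using the system-plus-user chat-message structure common to most LLM APIs. The function and personas are hypothetical placeholders, not any particular vendor's API.

```python
# Same question, two personas: the "system" message defines who the AI is,
# which breaks the generic default pattern and changes the style of the answer.
def build_messages(persona: str, question: str) -> list[dict]:
    return [
        {"role": "system", "content": persona},   # who the AI should be
        {"role": "user", "content": question},    # the actual task
    ]

question = "Explain net present value in three sentences."

mba_teacher = build_messages(
    "You are a professor teaching first-year MBA students. Be precise and use business examples.",
    question,
)
circus_clown = build_messages(
    "You are a circus clown. Answer playfully, using juggling metaphors.",
    question,
)

# These message lists would be sent to a chat model; the persona in the first
# message is what produces an MBA-lecture answer versus a clown routine.
for messages in (mba_teacher, circus_clown):
    print(messages)
```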

Principle 4: Assume this is the worst AI you will ever use. As I write this in late 2023, I think I know what the world looks like for at least the next year. Bigger, smarter Frontier Models are coming, along with an increasing range of smaller and open-source AI platforms. In addition, AIs are becoming connected to the world in new ways: they can read and write documents, see and hear, produce voice and images, and surf the web.

Traditional software is predictable, reliable, and follows a strict set of rules. When properly built and debugged, software yields the same outcomes every time. AI, on the other hand, is anything but predictable and reliable. It can surprise us with novel solutions, forget its own abilities, and hallucinate incorrect answers. This unpredictability and unreliability can result in a fascinating array of interactions. I have been startled by the creative solutions AI develops in response to a thorny problem, only to be stymied as the AI completely refuses to address the same issue when I ask again.

AI doesn’t act like software, but it does act like a human being.

Soon, companies will start to deploy LLMs that are built specifically to optimize “engagement” in the same way that social media timelines are fine-tuned to increase the amount of time you spend on your favorite site.

Profound human-AI relationships like the Replika users’ will proliferate, and more people will be fooled, either by choice or by bad luck, into thinking that their AI companions are real. And this is only the beginning. As AIs become more connected to the world, by adding the ability to speak and be spoken to, the sense of connection deepens.

Our first principle of working with AI is to always invite it to the table.

The biggest issue limiting AI is also one of its strengths: its notorious ability to make stuff up, to hallucinate.

Remember that LLMs work by predicting the most likely words to follow the prompt you gave it based on the statistical patterns in its training data. It does not care if the words are true, meaningful, or original. It just wants to produce a coherent and plausible text that makes you happy. Hallucinations sound likely and contextually appropriate enough to make it hard to tell lies from the truth.
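
A toy Python example of the point above: a next-word predictor only ranks continuations by how likely they look, not by whether they are true. The prompt, the candidate words, and their probabilities below are entirely made up for illustration; real LLMs work over subword tokens with neural networks.

```python
import random

# Hypothetical learned probabilities for the continuations of one prompt.
next_word_probs = {
    "The capital of Australia is": {
        "Sydney": 0.55,    # common in casual text, fluent -- and wrong
        "Canberra": 0.40,  # correct, but less frequent in the "training data"
        "Vienna": 0.05,
    }
}

def complete(prompt: str, greedy: bool = True) -> str:
    """Return the most probable (or a sampled) continuation for the prompt."""
    probs = next_word_probs[prompt]
    if greedy:
        return max(probs, key=probs.get)
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights, k=1)[0]

prompt = "The capital of Australia is"
print(prompt, complete(prompt))  # prints a confident, plausible, false answer
```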

AI doesn’t actually “know” anything. It makes up its answers on the fly.

After all, how can AI, a machine, generate something new and creative? The issue is that we often mistake novelty for originality. New ideas do not come from the ether; they are based on existing concepts. Innovation scholars have long pointed to the importance of recombination in generating ideas. Breakthroughs often happen when people connect distant, seemingly unrelated ideas. To take a canonical example, the Wright brothers combined their experience as bicycle mechanics and their observations of the flight of birds to develop their concept of a controllable plane that could be balanced and steered by warping its wings. They were not the inventors of the bicycle, the first to observe birds’ wings, or even the first people to try to build an airplane. Instead, they were the first to see the connections between these concepts. If you can link disparate ideas from multiple fields and add a little random creativity, you might be able to create something new.

In fact, by many of the common psychological tests of creativity, AI is already more creative than humans.

Coming up with lots of ideas was not correlated with intelligence; it seems to be a skill some people have and others do not.

As we saw in the AUT (the Alternative Uses Test), generative AI is excellent at generating a long list of ideas. From a practical standpoint, the AI should be invited to any brainstorming session you hold. So how should we use AI to help generate ideas? Fortunately, the papers, and other research on innovation, have some good suggestions. When you do include AI in idea generation, you should expect that most of its ideas will be mediocre. But that’s okay — that’s where you, as a human, come into the equation. You are looking for ideas that spark inspiration and recombination, and having a long list of generated possibilities can be an easier place to start for people who are not great at coming up with ideas on their own.

We can see these results in a study by economists Shakked Noy and Whitney Zhang from MIT, examining how ChatGPT could transform the way we work.

The results were nothing short of astonishing. Participants who used ChatGPT saw a dramatic reduction in their time on tasks, slashing it by a whopping 37 percent. Not only did they save time, but the quality of their work also increased as judged by other humans. These improvements were not limited to specific areas ; the entire time distribution shifted to faster work, and the entire grade distribution shifted to higher quality. The study also showed that AI teammates helped reduce productivity inequality. Participants who scored lower on the first round without AI assistance benefited more from using ChatGPT, narrowing the gap between low and high scorers.

When researchers from Microsoft assigned programmers to use AI, they found an increase of 55.8 percent in productivity for sample tasks.

Of course, the unresolved question is whether AI is more or less accurate than humans, and whether its extended abilities to do creative, human work make up for its errors. The trade-offs are often surprising. A paper published in the Journal of the American Medical Association: Internal Medicine asked ChatGPT-3.5 to answer medical questions from the internet, and had medical professionals evaluate both the AI’s answers and an answer provided by a doctor. The AI was almost 10 times as likely to be rated as very empathetic as the results provided by the human, and 3.6 times as likely to be rated as providing good-quality information compared to human doctors.

Since requiring AI in my classes, I no longer see badly written work at all. And as my students learn, if you work interactively with the AI, the outcome doesn’t feel generic, it feels like a human did it.

The implications of having AI write our first drafts (even if we do the work ourselves, which is not a given) are huge. One consequence is that we could lose our creativity and originality. When we use AI to generate our first drafts, we tend to anchor on the first idea that the machine produces, which influences our future work. Even if we rewrite the drafts completely, they will still be tainted by the AI’s influence. We will not be able to explore different perspectives and alternatives, which could lead to better solutions and insights. Another consequence is that we could reduce the quality and depth of our thinking and reasoning. When we use AI to generate our first drafts, we don’t have to think as hard or as deeply about what we write. We rely on the machine to do the hard work of analysis and synthesis, and we don’t engage in critical and reflective thinking ourselves. We also miss the opportunity to learn from our mistakes and feedback, and the chance to develop our own style. There is already evidence that this is going to be a problem. The MIT study mentioned earlier found that ChatGPT mostly serves as a substitute for human effort, not a complement to our skills. In fact, the vast majority of participants didn’t even bother editing the AI’s output. This is a problem I see repeatedly when people first use AI: they just paste in the exact question they are asked and let the AI answer it. A lot of work is time-consuming by design. In a world in which the AI gives an instant, pretty good, near universally accessible shortcut, we’ll soon face a crisis of meaning in creative work of all kinds.

But AI will make a lot of previously useful tasks meaningless. It will also remove the facade that previously disguised meaningless tasks. We may not have always known if our work mattered in the bigger picture, but in most organizations, the people in your part of the organizational structure felt it did. With AI-generated work sent to other AIs to assess, that sense of meaning disappears.

Each study has concluded the same thing: almost all of our jobs will overlap with the capabilities of AI. As I’ve alluded to previously, the shape of this AI revolution in the workplace looks very different from every previous automation revolution, which typically started with the most repetitive and dangerous jobs.

Only 36 job categories out of 1,016 had no overlap with AI.

Boston Consulting Group (BCG),

nearly eight hundred consultants who took part in the experiments.

The group working with the AI did significantly better than the consultants who were not.

The AI-powered consultants were faster, and their work was considered more creative, better written, and more analytical than that of their peers.

There is danger in working with AIs — danger that we make ourselves redundant, of course, but also danger that we trust AIs for work too much.

He found that recruiters who used high-quality AI became lazy, careless, and less skilled in their own judgment. They missed out on some brilliant applicants and made worse decisions than recruiters who used low-quality AI or no AI at all.

trade-off between AI quality and human effort. When the AI is very good, humans have no reason to work hard and pay attention. They let the AI take over instead of using it as a tool, which can hurt human learning, skill development, and productivity. He called this “falling asleep at the wheel.”

With that knowledge, we need to be conscious about the tasks we are giving AI, so as to take advantage of its strengths and our weaknesses.

Using AI as a co-intelligence, as I did while writing, is where AI is the most valuable.

While data is hard to come by, I have already met many people at companies where AI is banned who are using this workaround — and those are just the ones willing to admit it! This type of shadow IT use is common in organizations, but it incentivizes workers to keep quiet about their innovations and productivity gains.

the usual ways in which organizations try to respond to new technologies don’t work well for AI. They are all far too centralized and far too slow.

There are many reasons for companies to not turn efficiency gains into head-count reduction or cost reduction. Companies that figure out how to use their newly productive workforce should be able to dominate any company that tries to keep their post-AI output the same as their pre-AI output, just with fewer people. And companies that commit to maintaining their workforce will likely have employees as partners who are happy to teach others about the uses of AI at work, rather than scared workers who hide their AI for fear of being replaced.

Without a fundamental restructuring of how organizations work, the benefits of AI will never be recognized.

We often take for granted the systems we use to structure and coordinate work in our organizations. We assume they are natural ways of getting things done. But in reality, they are historical artifacts, shaped by the technological and social conditions of their times.

By acting as a co-intelligence managing work, or at least helping managers manage work, the enhanced capabilities of LLMs could radically change the experience of work. A single AI can talk to hundreds of workers, offering advice and monitoring performance. They could mentor, or they could manipulate. They could guide decisions in ways that are subtle or overt.

Boredom is not just boring; it is dangerous in its own way. In an ideal world, managers would spend time trying to end the useless and repetitive work that leads to boredom, and to adjust work to focus on the more engaging tasks. Despite years of management advice, however, most official rituals, forms, and requirements persist long past their usefulness. If humans couldn’t end this tedious work, maybe machines can.

As we have seen, it seems very likely that AI will take over human tasks. If we take advantage of all that AI has to offer, this could be a good thing. Boring tasks, or tasks that we are not good at, can be outsourced to AI, leaving good and high-value tasks to us, or at least to AI-human Cyborg teams. This fits into historical patterns of automation, where the bundles of tasks that make up jobs change as new technologies are developed.

Stock photography, a $3 billion per year market, is likely to largely disappear as AIs, ironically trained on these very images, can easily produce customized images. Or consider the $110 billion a year call-center industry, which will reckon with the impact of fine-tuned AIs handling ever more tasks that were once done by humans, acting like a phone tree service that actually works.

ever held by women — telephone operators. By the 1920s, 15 percent of all American women had worked as operators, and AT&T was the largest employer in the United States. AT&T decided to remove the old-school telephone operators and replace them with much cheaper direct dialling. Operator jobs dropped rapidly by 50 to 80 percent. As might be expected, the job market overall adjusted quickly, as young women found other roles, like secretarial positions, that offered similar or better pay. But the women with the most experience as operators took a larger hit to their long-term earnings, as their tenure in a now extinct job did not translate to other fields. So, while jobs usually adjust to automation, they do not always, at least not for everyone. Of course, there are also reasons why AI might be different from other technological waves. It is the first wave of automation that broadly affects the highest-paid professional workers. Plus, AI adoption is happening much more quickly, and much more broadly, than previous waves of technology.

In study after study, the people who get the biggest boost from AI are those with the lowest initial ability — it turns poor performers into good performers.

Roy Amara says: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” The future is remarkably unclear in the long term.

“The 2 Sigma Problem.” In this paper, Bloom reported that the average student tutored one-to-one performed two standard deviations better than students educated in a conventional classroom environment. This means that the average tutored student scored higher than 98 percent of the students in the control group (though not all studies of tutoring have found as large an impact).
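
To spell out the statistics behind that figure, assuming roughly normally distributed test scores, a score two standard deviations above the mean sits at about the 98th percentile of the comparison group:

$$\Pr(Z < 2) = \Phi(2) \approx 0.977$$

so the average tutored student in Bloom's data outscored roughly 98 percent of conventionally taught students.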

We are certainly at an inflection point where AI will reshape how we teach and learn, both in schools and after we leave them. At the same time, the ways in which AI will impact education in the near future are likely to be counterintuitive. They won’t replace teachers but will make classrooms more necessary. They may force us to learn more facts, not fewer, in school. And they will destroy the way we teach before they improve it.

One study of eleven years of college courses found that when students did their homework in 2008, it improved test grades for 86 percent of them, but it helped only 45 percent of students in 2017. Why? Because over half of students were looking up homework answers on the internet by 2017, so they never got the benefits of homework. And that isn’t all. By 2017, 15 percent of students had paid someone to do an assignment, usually through essay mills online.

Additionally, and most important: there is no way to detect whether or not a piece of text is AI-generated. A couple of rounds of prompting remove the ability of any detection system to identify AI writing. Even worse, detectors have high false-positive rates, accusing people (and especially non-native English speakers) of using AI when they are not. You cannot ask an AI to detect AI writing, either — it will just make up an answer. Unless you are doing in-class assignments, there is no accurate way of detecting whether work is human-created.

Just as calculators did not replace the need for learning maths, AI will not replace the need for learning to write and think critically. It may take a while to sort it out, but we will do so. In fact, we must do so — it’s too late to put the genie back in the bottle.

cheating will remain undetectable and widespread. AI tutoring will likely become excellent, but not a replacement for school. Classrooms provide so much more: opportunities to practice learned skills, collaborate on problem-solving, socialize, and receive support from instructors. School will continue to add value, even with excellent AI tutors.

We have already been finding that AI is very good at assisting instructors to prepare more engaging, organized lectures and make the traditional passive lecture far more active. In the longer term, however, the lecture is in danger. Too many involve passive learning, where students simply listen and take notes without engaging in active problem-solving or critical thinking. Moreover, the one-size-fits-all approach of lectures doesn’t account for individual differences and abilities, leading to some students falling behind while others become disengaged due to a lack of challenge.

Multiple studies support the growing consensus that active learning is one of the most effective approaches to education, but it can take effort to develop active learning strategies, and students still need proper initial instruction.

We stand on the cusp of an era when AI changes how we educate — empowering teachers and students and reshaping the learning experience — and, hopefully, achieving that two sigma improvement for all. The only question is whether we steer this shift in a way that lives up to the ideals of expanding opportunity for everyone and nurturing human potential.

The biggest danger to our educational system posed by AI is not its destruction of homework, but rather its undermining of the hidden system of apprenticeship that comes after formal education. For most professional workers, leaving school for the workforce marks the beginning of their practical education, not the end. Education is followed by years of on-the-job training, which can range from organized training programs to a few years of late nights and angry bosses yelling at you about menial tasks. This system was not designed in a centralized way as parts of our educational system were, but it is critical to the way we actually learn to do real work.

Only by learning from more experienced experts in a field, and trying and failing under their tutelage, do amateurs become experts.

Even as experts become the only people who can effectively check the work of ever more capable AIs, we are in danger of stopping the pipeline that creates experts.

This is the paradox of knowledge acquisition in the age of AI: we may think we don’t need to work to memorize and amass basic skills, or build up a storehouse of fundamental knowledge — after all, this is what the AI is good at. Foundational skills, always tedious to learn, seem to be obsolete. And they might be, if there was a shortcut to being an expert. But the path to expertise requires a grounding in facts.

Learning any skill and mastering any domain requires rote memorization, careful skills building, and purposeful practice, and the AI (and future generations of AI) will undoubtedly be better than a novice at many early skills.

The issue is that in order to learn to think critically, problem-solve, understand abstract concepts, reason through novel problems, and evaluate the AI’s output, we need subject matter expertise.

The closer we move to a world of Cyborgs and Centaurs in which the AI augments our work, the more we need to maintain and nurture human expertise. We need expert humans in the loop.

AI may be able to help directly address these issues, creating a better training system than we have today.

I have been making the argument that expertise is going to matter more than before because experts may be able to get the most out of AI coworkers and are likely to be able to fact-check and correct AI errors. But even with deliberate practice, not everyone can become an expert in everything. Talent also plays a role.

Silicon Valley tells stories of the “10x engineer.” That is, a highly productive software engineer is up to 10 times better than an average one. This is actually a topic that has been studied repeatedly, although most of those studies are quite old.