Ask 50 people you meet what tech has dominated the 2020s so far and many would say “AI”. Since the advent of OpenAI’s ChatGPT in late 2022, people have positioned AI as one of the most transformative technologies of our time, offering opportunities for efficiency, innovation and growth. That said, like many powerful technologies, AI is not without its risks. And because its applications could be so ground-breaking, the consequences when AI systems fail or make errors could be significant and costly. Who you gonna blame? The robots? So, who you gonna call? The insurance folks?

Here’s how insurance traditionally works. It gives you a safety net for unexpected events in life, protecting you from financial losses caused by accidents, health issues, disasters or other unforeseen circumstances.

So, you pay the insurance company a regular amount of money, called a premium, monthly or annually. Many people pay premiums into a big pool, and some of them will face unfortunate events for which they need financial help. At that point, they can make a claim, which the insurance company assesses; if it deems the claim valid, it provides financial assistance to help cover costs related to the incident. The policy could have a limit, which is the maximum amount the insurance company will pay, and there could be a deductible, an initial amount you need to cover yourself before the company starts paying.
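The payout arithmetic is simple enough to sketch in a few lines. Here is a minimal illustration in Python; the figures and the `claim_payout` name are hypothetical, purely for illustration:

```python
# A minimal sketch of the payout arithmetic described above.
# All figures and the function name are hypothetical.

def claim_payout(claim: float, deductible: float, limit: float) -> float:
    """Pay the claim minus the deductible, never below zero
    and never above the policy limit."""
    return min(limit, max(0.0, claim - deductible))

# A $10,000 claim against a policy with a $1,000 deductible
# and a $50,000 limit pays out $9,000.
print(claim_payout(10_000, 1_000, 50_000))  # 9000.0
```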


In late 2022, OpenAI's ChatGPT was released to the public. And while people threw bouquets at the power of generative AI, there were a couple of brickbats as well. Many were worried about potential copyright infringement and privacy violations. And then, the lawsuits began.

One lawsuit alleged that OpenAI copied text from books without the copyright holders' consent, credit or compensation. Another claimed that OpenAI's models collect people's personal information illegally. It was argued that ChatGPT could accurately summarize books, which suggests the underlying LLM was trained on them.

There have also been accusations that OpenAI gathers people's images, music preferences, locations, financial details and more through its integrations with platforms like Spotify, Snapchat, Slack and Microsoft Teams. Then the New York Times sued OpenAI for copyright infringement, becoming the first major American media organization to do so, contending that millions of NYT articles were used to train chatbots that now compete with the outlet as a source of information. Comedian Sarah Silverman also joined the lawsuits, accusing OpenAI of ingesting her memoir as training text for its AI programmes. Authors like John Grisham, George R.R. Martin, Michael Connelly and Jodi Picoult have sued OpenAI as well.

So, why the OpenAI dislike? Are people just getting on the bandwagon of lawsuits? Why can't everyone just get along?


Remember November 2023? That was a tumultuous time for OpenAI. Co-founder Samuel Altman was abruptly fired over Google Meet, briefly joined Microsoft and then returned to OpenAI. On his return, most of the board members who had fired him were themselves replaced. But just before Altman was fired, something happened.

A group of staff researchers was said to have sent a letter to the OpenAI board, warning it about a new AI algorithm that could pose a threat to humanity. It concerned a mysterious endeavour called Project Q* (pronounced Q-Star). Some believed Project Q* could be a significant breakthrough in the pursuit of AGI, Artificial General Intelligence: a system that isn't good at just one specific thing but could do a wide range of things better than people. Today's systems are narrow; a smartphone may be great at understanding voice commands, but it cannot pick up a new skill, like a new language, without specific programming.

AGI aims to create machines that aren't narrowly focused but can understand, learn and do many things, much like a human being. Such a system could learn on its own, understand people's needs better over time and adapt to new situations without needing specific instructions for each scenario. The idea of AGI, in short, is to build machines with the kind of general intelligence people have.
