The Mystery of Q-Star: An AI That Threatens Humanity

After the runaway success of ChatGPT, it was difficult for people to believe that an Artificial Intelligence program could be this powerful.

The company behind it, OpenAI, and its CEO, Sam Altman, have emerged as the new faces of the AI revolution. The common belief is that over the next couple of decades, all repetitive human work will be done better, cheaper, and faster by AI. But a few days ago, some strange things happened at this company.

All of a sudden, the company's board of directors fired Sam Altman. The employees were shocked to hear this news, and many of them threatened to resign from their jobs.

Over the next four days, OpenAI's CEO position changed hands three times. But the most shocking thing is the reason behind all this controversy: a mysterious AI called Q-Star. You heard that right.


A few days before Sam Altman was fired, researchers working at the company wrote a letter to the board of directors. The letter contained a warning: the company had built a very powerful AI.

An AI that could put humanity in danger. An AI that can not only solve very complex mathematical and scientific problems, but can also predict future events to some extent. Internally, the company has named this AI Q-Star.


Let's understand this mystery in depth in this article.

"Altman once chose Microsoft. And he chose Microsoft again. Yet another twist in the Sam Altman saga at OpenAI."

This year, Tom Cruise's new Mission: Impossible film, Dead Reckoning Part One, was released. If you remember, the main villain in this film was not a human.


It was an Artificial Intelligence program. "An entity. It knows your every secret."

The film named this AI "the Entity." And this AI was shown to be so powerful that it was present everywhere, at all times, and could manipulate humans for its own benefit.


With the help of its mass surveillance, it could even predict the future mathematically. Predicting the future here doesn't mean telling what will happen in the next 10-15 years; it means telling, in any particular situation, the outcome of that situation over the next day or the next week.

This AI was not good at just two or three tasks. It could do almost every task better than humans. In real life, this type of Artificial Intelligence has a name.


AGI. Artificial General Intelligence.

A computer system that can do any job or task that a human does, only better. Today's famous AI tools, like ChatGPT and other Large Language Models, or Generative AI tools like Midjourney, are put in the category of Weak AI. That is, they are weak AI systems.

They are good only at specific tasks. The tasks they were made for, they do very well, often better than humans. But they are specialised in only those tasks.

If a strong AI like AGI existed, it could do better than humans at a lot of different things at once. But at this point in time, no strong AI exists in the world. OpenAI was established in 2015 as a non-profit.

And there was only one goal behind making this company: to build an Artificial General Intelligence, an AGI, that will benefit humanity. If you go to the company's website, they have clearly stated their mission.

"To ensure that Artificial General Intelligence benefits all of humanity." Many big tech entrepreneurs were behind this company, among them Sam Altman and Elon Musk.

In total, there were 10 co-founding members. Among them, the current Chief Scientist, Ilya Sutskever, and the President of OpenAI, Greg Brockman, were also included. Remember these two names here because they play an important role in our story.

Together, the co-founders pledged a total of $1 billion to OpenAI. In 2019, four years after the company's formation, Sam Altman became the CEO of OpenAI. And around four years after that, on 17th November 2023, the company's Board of Directors suddenly fired Sam.

In case you don't know, such big companies have a Board of Directors, which in most cases has the power to hire and fire the CEO. According to the rules, it is normal for the board to have this much power.

OpenAI's board had 6 members. I have already told you three of the names: one was Sam himself, the second was Ilya, the third was Greg. The remaining three were independent directors: Adam D'Angelo, the CEO of Quora; tech entrepreneur Tasha McCauley; and Helen Toner.

To reach any decision, a majority of the board needs to vote for it. And obviously, Sam wouldn't vote to fire himself. So only 5 votes were left.

The other thing here is that along with Sam, the board also removed Greg. So Greg wouldn't have voted against Sam either. Only 4 people were left.

Ilya, Adam, Tasha and Helen. These were the 4 people who informed Sam, through a Google Meet call, that he was being fired. What was the reason behind firing him? Not much was said publicly.

The board said that Sam was not consistently candid in his communications with them, indicating that he was hiding some things.

After being fired, neither Sam nor Greg gave any statement about why it happened. They just went on Twitter and told people that they felt very disappointed about it. This decision shocked the tech world.

How is it possible that such a powerful and influential CEO was suddenly fired by 4 board members? What was the real reason being hidden from the world? I told you in the beginning that OpenAI was founded as a non-profit. This is very unique and very important, because all the other popular tech companies are for-profit.

Meta (Facebook), Google, Microsoft, Apple: all these companies make products and sell services to earn money. Their main goal is to earn money. But OpenAI's main goal was to make an AGI for the benefit of humanity.

It was more of a research facility. The OpenAI Charter even states that the company's duty is towards humanity, not towards investors or employees.

But unfortunately, this non-profit model didn't last long. In 2019, the same year Sam Altman became CEO, OpenAI established a subsidiary company. This subsidiary was for-profit.

Its name is OpenAI Global LLC, and it works on a model of capped profits. "Capped" means there is a limit on the profit investors can earn: 100 times their investment.

So the investors who put their money into this for-profit company can get a maximum of 100 times return on their investment. If the company earns more profit than that, all the excess goes to the non-profit parent company.

Elon Musk later criticised this shift: "OpenAI was actually started, and it was meant to be, open source. I named it OpenAI after open source. It is in fact closed source. It should be renamed Super Closed Source for Maximum Profit AI."

In 2019, this for-profit subsidiary got its first and biggest funding from Microsoft: an investment of $1 billion. Over the next 4 years, as the company's popularity grew, Microsoft's total investment rose to around $13 billion.

Today, it is said that Microsoft holds a 49% stake in OpenAI's for-profit subsidiary. We'll talk about Microsoft later. But before that, it is important to understand that when this for-profit subsidiary was created, it was also stated that control of it would remain with the non-profit OpenAI parent company.

But with these conflicting arrangements, a very big question arose. How much of the work should be for-profit and how much non-profit? And when AGI is invented, to what extent should it be kept for-profit and to what extent non-profit? If everything becomes for-profit, it can have a very harmful impact on the world. The best example of this is Facebook.

Facebook's algorithms optimised everything only for the company's profit, regardless of whether people developed mental health issues or addictions, whether hate speech spread on the platform, or even whether riots broke out in the world because of it. "Facebook in India has been selective in curbing hate speech, misinformation and inflammatory posts."

"This is according to leaked documents obtained by the Associated Press." Two years ago, the leaked Facebook Papers exposing all these things were released; I have talked about them before. Some people have the same fear regarding AI.

Before Sam was fired, some researchers reportedly wrote a letter to OpenAI's board. According to reports, the letter said that the Q-Star AI the company is developing could be a big step towards reaching AGI, and it warned about Q-Star's potential. What exactly can be done with the help of this Q-Star? Only the researchers and employees know. But conceptually, Q* (Q-star) is also a well-known concept in AI, in the field of reinforcement learning.

What is reinforcement learning? It is basically a way to train AI. The system is given feedback so that it keeps learning, keeps understanding its environment better, and gradually takes better decisions.

A core idea in this field is the Q-value function, written Q(s, a), where s means state and a means action. Q* (Q-star) is the optimal version of this function: the one that tells you the best possible action in every state.
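To make the Q(s, a) idea concrete, here is a minimal sketch of standard tabular Q-learning, the textbook algorithm this notation comes from; it is not OpenAI's Q-Star, whose details are unknown. The toy environment and all the constants here are invented for illustration.

```python
# Minimal tabular Q-learning sketch (the textbook algorithm, not
# OpenAI's Q-Star). States are squares 0..3; actions are -1 (left)
# and +1 (right); reaching square 3 gives a reward of 1 and ends
# the episode.
import random
from collections import defaultdict

random.seed(0)
ALPHA, GAMMA = 0.5, 0.9   # learning rate and discount factor

def step(state, action):
    next_state = max(0, min(3, state + action))
    reward = 1.0 if next_state == 3 else 0.0
    return next_state, reward

Q = defaultdict(float)     # Q[(state, action)], starts at 0 everywhere

for _ in range(2000):      # episodes of random exploration
    s, steps = 0, 0
    while s != 3 and steps < 50:
        a = random.choice([-1, 1])
        s2, r = step(s, a)
        best_next = max(Q[(s2, -1)], Q[(s2, 1)])
        # Core update: nudge Q(s,a) towards reward + discounted future value
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s, steps = s2, steps + 1

# After training, moving right is valued higher than moving left from
# the starting square -- that best action is exactly what Q* encodes.
print(Q[(0, 1)] > Q[(0, -1)])
```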

Let me explain with an example. Suppose you are playing chess. One of your pawns is sitting on a square, a position.

The place where it stands is called the state s. The next move you want to play is the action a. Its Q-value function predicts all possible scenarios: when you play a move, what can happen next. By analysing this, the best possible move for your position is determined. In the language of chess, this best, most optimal move is what Q* gives you.
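Continuing the chess analogy above, once you have Q-values for every legal move, finding the optimal "Q-star move" is just picking the action with the highest value. The moves and scores below are invented for illustration.

```python
# Picking the optimal move = the action with the highest Q-value.
# (The moves and numbers here are made up for illustration.)

def best_move(q_values):
    """Return the action with the highest Q-value, i.e. the optimal move."""
    return max(q_values, key=q_values.get)

# Hypothetical Q-values for one pawn's legal moves in some position:
q = {"advance one square": 0.42, "capture left": 0.87, "hold position": 0.05}
print(best_move(q))  # capture left
```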

In this game of chess, which move is the best in any situation? I just took the example of a chessboard here, but think about whether this could be done for anything in the world. For example, if you are driving, an AI could track the speeds of the cars around you, observe their driving styles, analyse how all those cars will likely move in the next 1-2 seconds, and give you the best possible driving instruction.

Or before any election, a Q-Star AI could analyse every possible scenario and predict the election results. If OpenAI's mysterious Q-Star AI has such a capability, then it can predict the future to a great extent, just like what was shown in the Mission: Impossible film.

To predict the thought patterns of humans. To analyse all possible decisions at any moment. To accurately predict how a person will decide in any situation.

From business deals to political moves, such a Q-Star could have the capability to influence everything. Humans can do these things to some extent, but an AGI could take them to the next level, because humans have their own biases. Humans are emotional.

They take decisions based on those emotions. But an AI would make its predictions purely on the basis of data. Just as it tells you the most optimal move in a chess game after analysing all possible moves, this AI would be able to give the most accurate prediction.

Today's ChatGPT is able to write and translate languages very well by predicting the next words. But if Q-value learning were added to such an AI, we could get the most optimal answer to any question.
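The paragraph above is speculation, and so is this sketch: nobody outside OpenAI knows how, or whether, Q-values are combined with next-word prediction. Purely as a hypothetical illustration, a value-guided decoder might score each candidate token by its language-model probability plus an estimated long-term value, instead of by probability alone. All token names and numbers here are invented.

```python
# Hypothetical sketch: greedy decoding picks the most probable next
# token; a value-guided decoder adds an (assumed) learned value estimate.

def pick_next_token(candidates):
    """candidates: list of (token, log_prob, estimated_value) tuples."""
    return max(candidates, key=lambda c: c[1] + c[2])[0]

candidates = [("plausible", -0.1, 0.2),   # likely word, low long-term value
              ("correct",   -0.5, 0.9)]   # less likely, high long-term value

greedy = max(candidates, key=lambda c: c[1])[0]  # probability only
guided = pick_next_token(candidates)             # probability + value
print(greedy, guided)  # plausible correct
```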

This is just a guess we can make about this mysterious Q-Star AI. In reality, only the people working at OpenAI know how much progress has been made, and how close they are to AGI.

Those who raised the alarm about Q-Star feared that we are going in the wrong direction: the harm to humanity could far outweigh the benefit. Even though the board of directors gave a vague reason for firing Sam, the actual reason was a clash between two different ideologies.

In OpenAI, one ideology is for-profit, representing commercialisation. The other is the ideology of the non-profit group, who are more afraid of this technology.

Which of the two sides is right? That is a debate in itself. Those who favour the for-profit side say that if we want to develop better technology, we will need a lot of money from investors.

Without that money, it will not be possible at all. Those on the non-profit side say that the for-profit approach can destroy the original ideology and the technology can become a big threat to humanity. A few days before this controversy started, on 10th November, Microsoft's President Brad Smith asked at a conference: who do you trust more, a non-profit company, or a for-profit company that is controlled by just one person?

Here, he was indirectly pointing at Mark Zuckerberg. Within OpenAI, the balance between the two sides held for a long time. In February 2023, the first paid version of ChatGPT, ChatGPT Plus, was launched.

The next month, on 1st March, an API (Application Programming Interface) was launched so that other companies could integrate ChatGPT into their products. After this, on 14th March, GPT-4 was launched. Employees said they had never seen such hyper-commercialisation at OpenAI.

Because of this, a gap formed between the two sides. After the release of ChatGPT, a clear path to revenue and profit had emerged. As one employee put it: "We couldn't remain an idealistic research lab. We had our customers, and it was our job to serve them."

In October 2023, OpenAI launched its powerful image generator DALL-E 3, bundled with the paid versions of ChatGPT. After this, on 6th November, the company's first developer conference was held. "Welcome to our first ever OpenAI Dev Day."

"Today, we've got about 2 million developers building on our API for a wide variety of use cases, doing amazing stuff." At this conference, Sam gave a keynote presentation, just like an Apple or Google presentation. It was announced that you can also build custom versions of ChatGPT, called GPTs.

Not just one ChatGPT, but a lot of customised GPTs. We'll talk about these in detail in the course. But look at what was happening as commercialisation increased: on one side were Sam and President Greg, who were driving the commercialisation.

On the other side was Chief Scientist Ilya, who was feeling very uncomfortable with all this. Ilya was focused on AI safety. He once told employees that he is afraid AGI systems will treat humans the same way humans treat animals. According to Ilya, AGI is not far away.

AGI could arrive very soon in the coming years, and we all have to be prepared for it. "As more and more people see what AI can do and where it is headed, it will become clear how much trepidation is appropriate."

In July, OpenAI announced that a new Superalignment team would be formed to work on AI safety techniques, with Ilya as its leader. The company decided that 20% of its compute would be dedicated to this work.

Only and only focused on AI safety. By August-September, it was quite clear that the company was divided into two groups working in opposite directions. Sam was talking about which new big thing would be launched next.

After GPT-4, GPT-5. On the other hand, Ilya was focused on how to build AI safety into the company's work, and what precautions should be taken. Sam was focused on how to raise billions of dollars so that the pace could increase. The result of all this was that the other four board members of OpenAI sided with Ilya.

When they got the letter saying a powerful Q-Star had been invented, they concluded that the company had to be pulled back from its for-profit direction. After firing Sam, the board announced that Mira Murati would be the interim CEO. This was on 17th November. But by the morning of 19th November, the company's employees had started protesting.

Most of the company's employees were actually on Sam's side. Microsoft was also in an awkward position because of its 49% stake in the for-profit subsidiary. They didn't want the company to fall apart either.

Microsoft put pressure on the board to reinstate Sam as CEO. Instead, the next day, OpenAI announced that former Twitch CEO Emmett Shear had become the new CEO of OpenAI. On the same day, Microsoft's CEO Satya Nadella announced that his company would create a new advanced AI research team, and that Sam and Greg would lead it.

Because of Satya Nadella's move, Microsoft's stock price reached a record high. But the question arose: what would OpenAI's new CEO Emmett Shear do? Around 743 of OpenAI's 770 employees wrote a letter saying that if Sam and Greg didn't come back, they would resign. More than 90% of the employees threatened to leave the company. The first person to sign this letter was Mira Murati.

So more than 90% of the employees were ready to leave the company. Then came the biggest twist of all: Ilya himself signed the letter and tweeted that he regretted his participation in the board's actions. That is, Ilya also realised that if the company fell apart, no safety precautions could be taken at all. The result was that three co-founders of OpenAI came to one side.

Sam, Ilya and Greg. The three independent board members stood their ground. But then the company's new CEO, Emmett Shear, also threatened to resign.

He said that if he was not told why Sam was removed, he too would resign. After all this, it was very clear that there was no option left. So on 21st November, Sam was called back to the company.

And Sam became OpenAI's CEO again. In the end, the three independent board members were on one side, and on the other were the CEO and co-founders of the company. Two new members were added to the board: Bret Taylor, former co-CEO of Salesforce, and Larry Summers, former US Secretary of the Treasury.

This new board was given its first task: to appoint a bigger board of nine people. Sam then tweeted that he loves OpenAI, and that everything he did over the last few days was to keep this team together and to advance its mission.

He said that when he decided to join the new Microsoft team, he thought that would be his path, but that he was happy now, with the new board and Satya's support. From Microsoft's perspective, it could have hired OpenAI's CEO, co-founders, and almost all of its employees.

They could have done the same work at Microsoft. But OpenAI already has a good partnership with Microsoft, so Microsoft benefits either way. Satya Nadella tweeted: "We are encouraged by the changes to the OpenAI board."

"We believe that this is a first step towards more stable, well-informed and effective governance." It is also expected that Satya Nadella will get a seat on the board. The company is stable for now. But the question remains: in the future, will OpenAI turn fully for-profit, or will it maintain its non-profit values?

And what impact will that have on the development of AGI? Only time will tell. But one thing can be said for sure: AI is not going anywhere. Artificial Intelligence has become part of our world.

And the sooner you learn to use it, the easier it will be for you to stay ahead in this changing world. The course link is in the description below.

