AI Inflection Point: AI Agents, Value Creation Process & Business Model

AI is rapidly evolving from simple chatbots to autonomous agents, unlocking unprecedented productivity and massive computing demands. This shift is disrupting traditional value creation and business models.

AI is currently at an inflection point, and within the next two months things will get even scarier. Recent developments should start showing up in companies’ financials soon. Some companies will be negatively impacted, while others are seizing the opportunity of the century.

If you are investing in US or China tech, or if you are just DCA-ing into the S&P 500, this is for you to keep up with what is happening.

3 generations of AI in 3 years

Since ChatGPT (powered by GPT-3.5) launched in November 2022, we have experienced three generations of AI improvement in this short three-year span:

1. 1st Generation AI Chatbot: The first generation of AI was very good at recognizing patterns, finding relationships, and synthesizing what already exists. Basically, the underlying AI model generates tokens to directly answer a user's prompt.

  • Upside: Intern-level work

  • Downside 1: Training requires a huge amount of human-generated, high-quality data. The bigger the training dataset, the better the model.

  • Downside 2: Path-dependent. If an early token carries “wrong info” (i.e., a bad output or hallucination), that token is fed back in to generate the next token, and the next, compounding the mistake. In short, Gen 1 AI relied heavily on humans to catch mistakes and verify outputs.
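The path-dependence problem can be seen in a toy sketch. Here `next_token` is a stand-in for a real language model, not an actual API: each new token is conditioned on everything generated so far, so one hallucinated token steers every token after it.

```python
# Toy illustration of path dependence in autoregressive generation.
# next_token() is a pretend model, purely for illustration: it derives
# each new token from the last one, the way a real model conditions
# each token on the full context generated so far.

def next_token(context):
    """Pretend model: propagates the quality of the last token."""
    last = context[-1]
    return "correct" if last.startswith("correct") else "wrong"

def generate(prompt_token, steps):
    """Greedy autoregressive loop: append one token at a time."""
    tokens = [prompt_token]
    for _ in range(steps):
        tokens.append(next_token(tokens))
    return tokens

good_run = generate("correct-start", 4)
bad_run = generate("hallucinated-fact", 4)  # one bad token up front

print(good_run)  # every later token stays "correct"
print(bad_run)   # the early mistake compounds: all later tokens "wrong"
```

The point of the sketch: nothing in the loop ever re-checks earlier tokens, which is why Gen 1 systems needed a human in the loop to verify outputs.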

2. 2nd Generation AI Reasoning: The second generation is AI that can think. It shifted from pattern recognition to chain-of-thought. Basically, the underlying AI model generates a massive number of tokens to produce a first-pass answer to the user's prompt, then “thinks” about it, considering alternatives and verifying the correct path, before generating the final answer for the user.

  • Upside 1: Junior-level work

  • Upside 2: Because the output generated by a reasoning model is good enough, it can be used to further train models with synthetic data, significantly reducing reliance on human-generated, high-quality data. This creates a self-feeding loop for AI model improvement.

  • Downside 1: Because the AI needs to generate more tokens to “think”, compute for both training and inference increases significantly. More synthetic data = more compute needed for training. More “thinking” = more compute needed for inference.

  • Downside 2: Since more tokens are needed to “think”, the memory the AI needs to temporarily store context increases significantly. This is what caused the memory shortage in AI compute.
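The memory pressure comes from the KV cache: a transformer keeps the keys and values of every past token in memory while generating, so memory grows linearly with the length of the “thinking” trace. A back-of-the-envelope sketch, using illustrative dimensions for a hypothetical 7B-class model (32 layers, 32 attention heads, head dimension 128, fp16), not any specific product:

```python
# Rough KV-cache sizing for a transformer at inference time.
# All model dimensions below are assumptions for illustration.

def kv_cache_bytes(n_layers, n_heads, head_dim, seq_len, bytes_per_value=2):
    # 2x for keys AND values; fp16 = 2 bytes per stored value
    return 2 * n_layers * n_heads * head_dim * seq_len * bytes_per_value

# Hypothetical 7B-class model dimensions
short = kv_cache_bytes(32, 32, 128, seq_len=1_000)    # short prompt
long_ = kv_cache_bytes(32, 32, 128, seq_len=100_000)  # long "thinking" trace

print(f"1k-token context:   {short / 1e9:.2f} GB")   # ~0.5 GB
print(f"100k-token context: {long_ / 1e9:.2f} GB")   # ~52 GB
```

A 100x longer context needs 100x the cache, which is why long reasoning traces push inference hardware toward large, fast memory.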

Paid subscribers get to read the full analysis on the 3rd generation of AI (now), value creation process and who will capture the value, and business models that will win in future AI world.

Become a paid subscriber to read the rest of the article!

You can unlock paid-subscriber-only content and perks, such as an exclusive group chat and offline community meetups, when you join the Community Subscription.


A subscription gets you:

  • Access to the DoitDuit portfolio
  • Offline community meetup every month
  • 1-2 exclusive articles sent to you via email every week
  • Group chat exclusive to paid subscribers on Telegram
