AInvest Newsletter
Daily stocks & crypto headlines, free to your inbox
South Korea’s LG Electronics has introduced its latest AI innovation, Exaone 4.0, a hybrid-reasoning system that integrates a Large Language Model (LLM) with a reasoning AI engine. The model is the first of its kind in South Korea, combining the strengths of large language models and reasoning AI to excel in natural language processing, generation, and knowledge-based tasks. Exaone 4.0 is now available via FriendliAI's platform, allowing developers to leverage its capabilities across a wide range of applications.
The Exaone 4.0 has been developed by LG AI Research, the AI arm of LG Group. The model's hybrid reasoning approach enables it to generate and test hypotheses, making it a powerful tool for various industries. According to LG, the Exaone 4.0 outperforms other AI models, including Alibaba's Qwen 3, in benchmarks related to mathematics, science, and coding. This achievement underscores the model's advanced capabilities and its potential to revolutionize the AI landscape.
The launch of Exaone 4.0 positions LG as a key player in the global AI race, demonstrating its commitment to innovation and technological advancement. The company's investment in AI research and development has resulted in a model that is not only innovative but also practical, with real-world applications. The availability of Exaone 4.0 via FriendliAI's platform also demonstrates LG's commitment to making its technology accessible to developers and businesses around the world.
LG AI Research was established in 2020 to accelerate the practical use of generative AI. Not long after, the group launched its first model, Exaone 1.0, in 2021. Then, in March 2025, the company launched South Korea’s first reasoning artificial intelligence (AI) model, Exaone Deep, led by its flagship variant, Exaone Deep-32B. The model includes 32 billion parameters to aid in AI learning and reasoning, and experts claim it performs on a level comparable to Deepseek’s R1. Deepseek’s R1 model has 671 billion parameters, roughly 21 times as many as Exaone Deep-32B; put differently, the Korean model uses about 95% fewer parameters. While a higher parameter count often correlates with improved performance, it also demands more computational resources. As a result, more companies are shifting their focus toward optimizing models to deliver similar results with fewer parameters.
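The parameter gap described above comes down to quick arithmetic; a minimal check, using only the counts reported here:

```python
# Parameter counts as reported in the article (approximate figures).
r1_params = 671e9       # DeepSeek R1: ~671 billion parameters
exaone_params = 32e9    # Exaone Deep-32B: 32 billion parameters

# How many times larger R1 is, and Exaone's relative reduction.
ratio = r1_params / exaone_params          # ~21x
reduction = 1 - exaone_params / r1_params  # ~95% fewer parameters

print(f"R1 is {ratio:.1f}x larger; Exaone uses {reduction:.1%} fewer parameters")
```

This confirms the often-quoted framing: Exaone Deep-32B runs with about 95% fewer parameters than R1, which is why efficiency-focused labs highlight the comparison.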
Per several performance tests, the Exaone Deep model surpasses Deepseek’s R1 and Alibaba’s QwQ-32B models in most categories. In the 2024 US Mathematical Olympiad benchmark, Exaone Deep-32B scored 90, beating the Deepseek and Alibaba models at 86.7. The South Korean model also led in South Korea’s 2025 CSAT math section with an impressive 94.5. It likewise outperformed QwQ-32B in doctoral-level science, earning a score of 66.1 against 63.3. Nevertheless, Exaone Deep still lags behind Deepseek’s and Alibaba’s models in coding and language prompts.
According to LG, the new Exaone 4.0 has also outperformed other AI models from the United States, China, and France in additional benchmark tests. So far, very few companies have developed comparable hybrid models; the established hybrid reasoning systems on the market include Anthropic’s Claude and Alibaba’s Qwen models. The new Exaone 4.0 will be available in Seoul on July 22.
Lee Jin Sik, head of the EXAONE Lab at LG AI Research, commented, “We will continue our R&D efforts to establish EXAONE as Korea’s leading frontier AI model and to demonstrate its competitiveness in the global market.”