Articles


2024-05-09

[News] Rising Chinese GPU Contender Emerges? Achieving Local GPU Production in Three Years, Claims “Outperforming AMD”

The latest challenger has emerged in the battle for dominance in China’s GPU and graphics card market. China’s LinJoWing has unveiled its self-developed second-generation graphics processing chip, the GP201, reportedly boasting performance metrics surpassing those of AMD’s E8860 embedded graphics card.

According to a report from global media outlet Tom’s Hardware, LinJoWing, despite being only three years old, has demonstrated with its GP201 GPU performance comparable to that of AMD’s decade-old E8860 embedded graphics card. While the GPU is already in production and available in China, it has yet to surface on American shopping websites.

As per a report from Chinese media outlet IT Home, the GP201 outperforms the AMD E8860 embedded graphics card in various aspects such as 3D performance, 2D polygon rendering, ellipse rendering, pixel and image shifting, window rendering, and support for the domestic OpenCL library platform.

Additionally, LinJoWing’s GP201 GPU supports multiple Chinese-made processors and operating systems, with single-precision floating-point computing power reaching 1.2 TFLOPS. It supports 4K 60Hz display output and H.265 decoding, with a maximum power consumption of 30W. Currently, five models of the GPU have been released in full-height, half-height, MXM, and other form factors.
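
The article above notes that the GP201 supports the OpenCL library platform. As a rough illustration of what such support means in practice, the sketch below enumerates whatever OpenCL platforms and devices a system exposes, using PyOpenCL; it assumes the card ships with a conformant OpenCL driver, and none of the printed values are figures confirmed by the article.

```python
# Minimal sketch: listing OpenCL platforms and devices with PyOpenCL.
# Assumes a conformant OpenCL driver is installed for the GPU in question;
# the attributes printed here are generic OpenCL device properties.
import pyopencl as cl

for platform in cl.get_platforms():
    print(f"Platform: {platform.name} (vendor: {platform.vendor})")
    for device in platform.get_devices():
        print(f"  Device: {device.name}")
        print(f"    Compute units: {device.max_compute_units}, "
              f"max clock: {device.max_clock_frequency} MHz")
        print(f"    Global memory: {device.global_mem_size // (1024 ** 2)} MiB")
```

Any application written against standard OpenCL in this way should run unchanged on a card like the GP201, provided its driver implements the API correctly.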

Tom’s Hardware, however, finds the GP201’s performance unimpressive. NVIDIA’s entry-level GT 1030, released in 2017, matches the GP201 in clock speed, TFLOPS, and power consumption, and generally sells for under $50 on eBay. The GT 1030 also benefits from mature NVIDIA drivers, a level of software maturity that LinJoWing will find difficult to reach. Even so, LinJoWing’s ability to bring a GPU into production only three years after its founding gives it a competitive edge over other Chinese graphics card makers.

This year, LinJoWing also surpassed its competitor Loongson in the low-end GPU market. However, challenging its biggest competitor, Moore Threads, will require further effort: the specifications of Moore Threads’ flagship GPU currently rival those of NVIDIA’s RTX 3060 Ti.


(Photo credit: AMD)

Please note that this article cites information from Tom’s Hardware and IT Home.

2024-05-09

[COMPUTEX 2024] The Rise of Generative AI Sparks Innovation across Industries, with Taiwan-based Companies Leading as Essential Partners in the Global Supply Chain

“The Dawn of Generative AI Has Come!” This new chapter in the course of human technological evolution was first proclaimed by NVIDIA’s founder, Jensen Huang. Qualcomm’s CEO, Cristiano Amon, shares this optimism regarding generative AI. Amon believes the technology is rapidly evolving, is being adopted in applications such as mobile devices, and has the potential to radically transform the landscape of the smartphone industry. Similarly, Intel has declared the arrival of the “AI PC” era, signaling a major shift in computing-related technologies and applications.

COMPUTEX 2024, the global showcase of AIoT and startup innovations, will run from June 4th to June 7th. This year’s theme, ‘Connecting AI’, aligns perfectly with this article’s focus on the transformative power of generative AI and Taiwan’s pivotal role in driving innovation across industries.

This year, AI is transitioning from cloud computing to on-device computing. Various “AI PCs” and “AI smartphones” are being introduced to the market, offering consumers a wide range of choices. 2024 is even being referred to as the “Year of the AI PC,” with brands such as Asus, Acer, Dell, Lenovo, and LG actively releasing new products to capture market share. With the rapid rise of AI PCs and AI smartphones, revolutionary changes are expected in workplaces and in people’s daily lives. Furthermore, the PC and smartphone industries are also expected to be reinvigorated by new sources of demand.

An AI PC refers to a laptop (notebook) computer capable of performing AI computations on the device itself. Its main difference from regular office or business laptops lies in its CPU, which includes an additional neural processing unit (NPU). Examples of AI CPUs include Intel’s Core Ultra series and AMD’s Ryzen 8040 series. Additionally, AI PCs come with more DRAM to meet the demands of AI computations, supporting related applications such as on-device machine learning.
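
Since the defining feature of an AI PC is on-device acceleration, a developer’s first question is usually which accelerators the machine actually exposes. The hedged sketch below shows one common way to check, using ONNX Runtime’s execution providers; which provider names appear depends on the installed runtime build, and the preference order is purely illustrative.

```python
# Minimal sketch: discovering on-device accelerators via ONNX Runtime.
# Which providers appear depends on the installed onnxruntime build
# (NPU-backed providers such as QNN or OpenVINO are optional extras).
import onnxruntime as ort

available = ort.get_available_providers()
print("Available execution providers:", available)

# Prefer an NPU- or GPU-backed provider when present, falling back to the CPU.
preferred_order = [
    "QNNExecutionProvider",       # Qualcomm NPUs
    "OpenVINOExecutionProvider",  # Intel CPUs/GPUs/NPUs
    "DmlExecutionProvider",       # DirectML (Windows GPUs)
    "CPUExecutionProvider",       # always available
]
chosen = next(p for p in preferred_order if p in available)
print("Inference would run on:", chosen)
```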

Microsoft’s role is crucial in this context, as the company has introduced a conversational AI assistant called “Copilot” that aims to integrate itself seamlessly into tasks such as working on Microsoft Office documents, video calls, web browsing, and other forms of collaboration. Copilot can even be assigned a dedicated shortcut key on the keyboard, allowing PC users to experience a holistic, collaborative relationship with AI.

In the future, various computer functions will continue to be optimized with AI. Moreover, the barriers that constrain services such as ChatGPT, which still require an internet connection, are expected to disappear, meaning AI-based apps on PCs could one day run offline. Such a capability is one of the most eagerly awaited features among PC users this year.

Surging Development of LLMs Worldwide Has Led to a Massive Increase in AI Server Shipments

AI-enabled applications are not limited to PCs and smartphones. For example, an increasing number of cloud companies have started providing services that leverage AI in various domains, including passenger cars, household appliances, home security devices, wearable devices, headphones, cameras, speakers, TVs, etc. These services often involve processing voice commands and answering questions using technologies like ChatGPT. Going forward, AI-enabled applications will become ubiquitous in people’s daily lives.

Not to be overlooked is the fact that, as countries and multinational enterprises continue to develop their large language models (LLMs), the demand for AI servers will increase and thus promote overall market growth. Furthermore, edge AI servers are expected to become a major growth contributor in the future as well. Smaller businesses are more likely to use more modestly sized LLMs for their applications, and are therefore more inclined to adopt lower-priced AI chips that offer excellent cost-to-performance ratios.

TrendForce projects that shipments of AI servers, including models equipped with GPUs, FPGAs, and ASICs, will reach 1.655 million units in 2024, marking a growth of 40.2% compared with the 2023 figure. Furthermore, the share of AI servers in the overall server shipments for 2024 is projected to surpass 12%.
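
As a quick back-of-envelope check of the figures cited above, the arithmetic below derives the implied 2023 shipment base and the implied size of the overall server market; these derived values follow from the projection itself and are not separately reported TrendForce numbers.

```python
# Back-of-envelope check of the cited projection (derived values only).
ai_servers_2024 = 1_655_000   # projected 2024 AI server shipments, units
yoy_growth = 0.402            # 40.2% year-on-year growth
ai_share_2024 = 0.12          # AI servers as ~12% of total server shipments

implied_2023 = ai_servers_2024 / (1 + yoy_growth)
implied_total_2024 = ai_servers_2024 / ai_share_2024

print(f"Implied 2023 AI server shipments: ~{implied_2023 / 1e6:.2f} million units")
print(f"Implied 2024 total server shipments at a 12% share: "
      f"~{implied_total_2024 / 1e6:.1f} million units")
```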

Regarding the development of AI chips in the current year of 2024, the focus is on the competition among the B100, MI300, and Gaudi series respectively released by NVIDIA, AMD, and Intel. Apart from these chips, another significant highlight of this year is the emergence of in-house designed chips or ASICs from cloud service providers.

In addition to AI chips, the development of AI on PCs and smartphones is certainly another major driving force behind the technology sector in 2024. In the market for CPUs used in AI PCs, Intel’s Core Ultra series and AMD’s Ryzen 8000G series are expected to make a notable impact. The Snapdragon X Elite from Qualcomm has also garnered significant attention as it could potentially alter the competitive landscape in the near future.

Turning to the market for SoCs used in AI smartphones, the fierce competition between Qualcomm’s Snapdragon 8 Gen 3 and MediaTek’s Dimensity 9300 series is a key indicator. Another development that warrants attention is the adoption of AI chips in automotive hardware, such as infotainment systems and advanced driver assistance systems. The automotive market is undoubtedly one of the main battlegrounds among chip suppliers this year.

The supply chain in Taiwan has played a crucial role in providing the hardware that supports the advancement of AI-related technologies. When looking at various sections of the AI ecosystem, including chip manufacturing as well as the supply chains for AI servers and AI PCs, Taiwan-based companies have been important contributors.

Taiwan-based Companies in the Supply Chain Stand Ready for the Coming Wave of AI-related Demand

In the upstream of the supply chain, semiconductor foundries and OSAT providers such as TSMC, UMC, and ASE have always been key suppliers. As for ODMs or OEMs, companies including Wistron, Wiwynn, Inventec, Quanta, Gigabyte, Supermicro, and Foxconn Industrial Internet have become major participants in the supply chains for AI servers and AI PCs.

In terms of components, AI servers are notable for having a power supply requirement 2-3 times greater than that of general-purpose servers, and the power supply units used in AI servers must offer corresponding specification and performance upgrades. AI PCs likewise demand more computing power and consume more energy. Therefore, advances in power supply technologies are a significant indicator this year of the overall development of AI servers and AI PCs. Companies including Delta Electronics, LITE-ON, AcBel Polytech, CWT, and Chicony are expected to make important contributions to the upgrading and provisioning of power supply units.

Also, as computing power increases, heat dissipation has become a pressing concern for hardware manufacturers looking to further enhance their products. The advancements in heat dissipation made by solution providers such as Sunon, Auras, AVC, and FCN during this year will be particularly noteworthy.

Besides the aforementioned companies, Taiwan is also home to numerous suppliers for other key components related to AI PCs. The table below lists notable component providers operating on the island.

With the advent of generative AI, the technology sector is poised for a boom across its various domains. From AI PCs to AI smartphones and a wide range of smart devices, this year’s market for electronics-related technologies is characterized by diversity and innovation. Taiwan’s supply chain plays a vital role in the development of AI PCs and AI servers, including chips, components, and entire computing systems. As competition intensifies in the realm of LLMs and AI chips, this entire market is expected to encounter more challenges and opportunities.

Join the AI grand event at Computex 2024, alongside CEOs from AMD, Intel, Qualcomm, and ARM. Discover more about this expo! https://bit.ly/44Gm0pK

(Photo credit: Qualcomm)

2024-05-09

[News] Gearing up for Backside Power Delivery: Heated Tech War Between TSMC, Intel, and Samsung

As Moore’s Law progresses, transistors are becoming smaller and denser, with more layers stacked on top of each other. Power and data signals may have to pass through 10 to 20 layers of stacking to reach the transistors below, leading to increasingly complex networks of interconnects and power lines. At the same time, as current travels down through these layers, IR drop occurs, resulting in power loss.
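
IR drop is simply Ohm’s law applied to the chip’s power delivery network: the voltage lost across the network is the product of the current drawn and the network’s resistance. The worked example below uses hypothetical resistance and current values, chosen only to show why milliohm-level resistance matters at today’s sub-1V supply voltages; none of these numbers come from the article.

```python
# Illustrative IR-drop calculation (Ohm's law: V_drop = I * R).
# All values are hypothetical and for illustration only.
supply_voltage = 0.75        # nominal Vdd in volts
pdn_resistance = 0.005       # effective power-network resistance, ohms (5 mOhm)
load_current = 20.0          # current drawn by the logic, amperes

v_drop = load_current * pdn_resistance
v_seen_by_transistors = supply_voltage - v_drop

print(f"IR drop: {v_drop * 1000:.0f} mV "
      f"({v_drop / supply_voltage:.1%} of the nominal supply)")
print(f"Voltage actually reaching the transistors: {v_seen_by_transistors:.3f} V")
```

Moving the power network to the backside of the wafer shortens and widens these supply paths, which is precisely how the approaches described below reduce this loss.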

Apart from power loss, the space occupied by power supply lines is also a concern: they often take up at least 20% of a chip’s routing resources. Resolving the contention between the signal network and the power supply network for these resources, while continuing to miniaturize components, has become a major challenge for chip designers. As per a report from TechNews, this has led the semiconductor industry to begin shifting power supply networks to the backside of chips.

  • TSMC’s Super PowerRail Technology Set to Revolutionize Chip Efficiency with A16 Process Node Debut in 2025

Leading semiconductor foundry TSMC recently unveiled its A16 process at a technical forum in North America.

This new node not only accommodates more transistors, enhancing computational efficiency, but also reduces energy consumption. Of particular interest is the integration of the Super PowerRail architecture and nanosheet transistors in the A16 chip, driving the development of data center processors that are faster and more efficient.

Notably, TSMC’s A16 wires its chips differently: power lines deliver electricity to the transistors from beneath rather than from above, an approach known as backside power delivery that facilitates the production of more efficient chips.

In fact, one way to optimize processors is to alleviate IR drop, a phenomenon that reduces the voltage actually received by the transistors on a chip and consequently hurts performance. The A16 wiring scheme is less prone to such voltage drops, simplifying power distribution and allowing transistors to be packed more tightly, with the aim of accommodating more of them to enhance computational capability.

Additionally, TSMC’s A16 process technology directly connects the power transmission lines to the source and drain of the transistor, which improves chip efficiency.

Using the Super PowerRail in A16, TSMC achieves a 10% higher clock speed or a 15% to 20% decrease in power consumption at the same operating voltage (Vdd) compared to N2P. Moreover, chip density is increased by up to 1.10 times, supporting data center products.

  • Intel’s PowerVia Set for Production on Intel 20A in 2024

Similar to TSMC’s Super PowerRail, Intel has also introduced its backside power delivery solution, PowerVia.

According to Intel, power lines typically occupy around 20% of the space on the chip surface, but PowerVia’s backside power delivery technology saves this space, allowing more flexibility in the interconnect layers.

In addition, the Intel team previously created the Blue Sky Creek test chip to demonstrate the benefits of backside power delivery technology. Test results indicated that most areas of the chip achieved over 90% cell utilization, with a 30% platform voltage droop improvement, 6% frequency benefit, increased unit density, and potential cost reduction. The PowerVia test chip also exhibited excellent heat dissipation properties, aligning with expectations for higher power density as logic shrinks.

Furthermore, PowerVia is slated to be offered through Intel Foundry Services (IFS), enabling customer-designed chips to achieve their efficiency and performance targets faster.

According to official documentation from Intel, the tech giant plans to implement PowerVia on its Intel 20A process technology together with the RibbonFET gate-all-around transistor architecture. Production readiness is expected in the first half of 2024, with initial steps already being taken at the fab ahead of future mass production of client ARL (Arrow Lake) platforms.

  • Samsung Plans to Implement SF1.4 Process by 2027

In addition to leading the transition to GAA transistor technology, Samsung, another competitor of TSMC, is also wielding its Backside Power Delivery Network as a key weapon in the pursuit of advanced processes.

According to a previous report from Samsung, Jung Ki-tae, Chief Technology Officer of Samsung’s foundry division, announced plans to apply the backside power delivery technology to the 1.4-nanometer process by 2027.

Reports from Korean media outlet theelec indicate that, compared to traditional front-side power delivery networks, Samsung’s backside power delivery network successfully reduces wafer area consumption by 14.8%, providing more space on the chip to accommodate additional transistors and thereby enhancing overall performance.

Additionally, wiring length is reduced by 9.2%, aiding in resistance reduction to allow more current flow, leading to lower power consumption and improved power transmission conditions. Samsung Electronics representatives noted that the mass production timeline for semiconductor chips adopting backside power delivery technology may vary depending on customer schedules, and Samsung is currently investigating customer demand for the application of this technology.


Please note that this article cites information from TechNews and theelec.

2024-05-08

[News] Apple Unveils M4 Chip for AI, Heralding a New Era of the AI PC

On May 7 (U.S. time), Apple launched its latest self-developed computer chip, M4, which debuts in the new iPad Pro. M4 allegedly boasts Apple’s fastest-ever neural engine, capable of performing up to 38 trillion operations per second, surpassing the neural processing units of any AI PC available today.

Apple stated that the neural engine, along with the next-generation machine learning accelerator in the CPU, high-performance GPU, and higher-bandwidth unified memory, makes the M4 an extremely powerful AI chip.

  • Teardown of M4 Chip

Internally, M4 consists of 28 billion transistors, slightly more than M3. In terms of process node, the chip is built on the second-generation 3nm technology, functioning as a system-on-chip (SoC) that further enhances the efficiency of Apple’s chips.

Reportedly, M4 utilizes the second-generation 3nm technology in line with TSMC’s previously introduced N3E process. According to TSMC, while N3E’s density isn’t as high as N3B, it offers better performance and power characteristics.

In terms of core architecture, the new CPU of the M4 chip features up to 10 cores, comprising 4 performance cores and 6 efficiency cores, two more efficiency cores than in M3.

The new 10-core GPU builds upon the next-generation GPU architecture introduced with M3 and brings dynamic caching, hardware-accelerated ray tracing, and hardware-accelerated mesh shading to the iPad for the first time. M4 significantly improves professional rendering performance in applications like Octane, now 4 times faster than the M2.

Compared to the powerful M2 in the previous iPad Pro generation, M4 boasts a 1.5x improvement in CPU performance. Whether processing complex orchestral files in Logic Pro or adding demanding effects to 4K videos in LumaFusion, M4 can enhance the performance of the entire professional workflow.

As for memory, the M4 chip adopts faster LPDDR5X, achieving a unified memory bandwidth of 120GB/s. LPDDR5X is a mid-term update of the LPDDR5 standard, offering memory clock speeds beyond LPDDR5’s 6400 MT/s ceiling. Currently, LPDDR5X speeds reach up to 8533 MT/s, although the memory clock speed of M4 only reaches approximately 7700 MT/s.
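
The cited bandwidth and transfer-rate figures are linked by a simple relation: bandwidth equals the transfer rate multiplied by the bus width in bytes. The sketch below assumes a 128-bit unified memory bus, which the article does not state and is included only for illustration.

```python
# Relating memory transfer rate to bandwidth (bandwidth = rate x bus width).
# The 128-bit bus width is an assumption for illustration, not from the article.
bus_width_bits = 128
bytes_per_transfer = bus_width_bits // 8   # 16 bytes per transfer

for label, mtps in [("LPDDR5 ceiling", 6400),
                    ("LPDDR5X ceiling", 8533),
                    ("M4 memory (approx., per the article)", 7700)]:
    bandwidth_gbps = mtps * 1e6 * bytes_per_transfer / 1e9
    print(f"{label}: {mtps} MT/s x {bytes_per_transfer} B -> ~{bandwidth_gbps:.0f} GB/s")
```

On that assumed bus width, a rate of about 7500 MT/s corresponds exactly to the 120GB/s figure quoted for M4, broadly consistent with the approximate clock speed reported above.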

Data from the industry shows that Apple’s M3 supports up to 24GB of memory, but there is no further information indicating whether Apple will expand memory capacity further with M4. The new iPad Pro models will be equipped with 8GB or 16GB of DRAM, depending on the specific model.

The new neural engine integrated into the M4 chip has 16 cores and is capable of performing 38 trillion operations per second, 60 times faster than the first neural engine on the Apple A11 Bionic chip.
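
A quick arithmetic cross-check of the “60 times faster” claim: Apple rated the A11 Bionic’s first-generation neural engine at roughly 600 billion operations per second, so the ratio works out as follows.

```python
# Cross-checking the "60 times faster" comparison with a simple ratio.
# The A11 figure (~600 billion operations per second) is Apple's published
# rating for that chip's first-generation neural engine.
m4_ops_per_sec = 38e12    # 38 trillion operations per second
a11_ops_per_sec = 600e9   # ~0.6 trillion operations per second

speedup = m4_ops_per_sec / a11_ops_per_sec
print(f"M4 neural engine vs. A11: ~{speedup:.0f}x")  # ~63x, in line with the claim
```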

Additionally, M4 chip adopts a revolutionary display engine designed with cutting-edge technology, achieving astonishing precision, color accuracy, and brightness uniformity on the Ultra Retina XDR display, which combines the light from two OLED panels to create the most advanced display.

Apple’s Senior Vice President of Hardware Technologies, Johny Srouji, stated that M4’s high-efficiency performance and its innovative display engine enable the iPad Pro’s slim design and groundbreaking display. Fundamental improvements in the CPU, GPU, neural engine, and memory system make M4 a perfect fit for the latest AI-driven applications. Overall, this new chip makes the iPad Pro the most powerful device of its kind.

  • 2024 Marks the First Year of AI PC Era

Currently, AI has emerged as a superstar worldwide. Apart from markets like servers, the consumer market is embracing a new opportunity: the AI PC.

Previously, TrendForce anticipated that 2024 would mark a significant expansion in edge AI applications, leveraging the groundwork laid by AI servers and branching into AI PCs and other terminal devices. Edge AI applications with more rigorous requirements will shift to AI PCs, dispersing the workload of AI servers and expanding the potential scale of AI usage. However, the definition of an AI PC remains unclear.

According to Apple, the neural engine in M4 is Apple’s most powerful neural engine to date, outperforming any neural processing unit in any AI PC available today. Tim Millet, Vice President of Apple Platform Architecture, stated that M4 provides the same performance as M2 while using only half the power. Compared to the next-generation PC chips of various lightweight laptops, M4 delivers the same performance with only 1/4 of the power consumption.

Meanwhile, frequent developments from other major players suggest increasingly fierce competition in the AI PC sector, and the industry also holds high expectations for AI PCs. Microsoft has dubbed 2024 the “Year of the AI PC.” Based on the estimated product launch timelines of PC brand manufacturers, Microsoft predicts that half of commercial computers will be AI PCs by 2026.

Intel has emphasized that the AI PC will be a turning point for the revival of the PC industry and will play a crucial role among the industry highlights of 2024. Intel CEO Pat Gelsinger previously stated at a conference that, driven by demand for AI PCs and Windows update cycles, customers continue to increase their processor orders with Intel. As such, Intel’s AI PC CPU shipments in 2024 are expected to exceed the original target of 40 million units.

TrendForce posits that AI PCs are expected to meet Microsoft’s benchmark of 40 TOPS of computational power. With new products meeting this threshold expected to ship in late 2024, significant growth is anticipated in 2025, especially following Intel’s release of its Lunar Lake CPU by the end of 2024.

The AI PC market is currently propelled by two key drivers: Firstly, demand for terminal applications, mainly dominated by Microsoft through its Windows OS and Office suite, is a significant factor. Microsoft is poised to integrate Copilot into the next generation of Windows, making Copilot a fundamental requirement for AI PCs.

Secondly, Intel, as a leading CPU manufacturer, is advocating for AI PCs that combine CPU, GPU, and NPU architectures to enable a variety of terminal AI applications.


(Photo credit: Apple)

Please note that this article cites information from WeChat account  DRAMeXchange.

2024-05-08

[News] U.S. Imposes Further Sanctions, Revoking Intel’s and Qualcomm’s Licenses to Supply Chips to Huawei

The U.S. government has reportedly revoked the licenses of Intel and Qualcomm to supply semiconductor chips used in laptops and handsets to Huawei. According to Reuters citing sources, some companies received notices on May 7th, and the revocation of the licenses took immediate effect.

In April, Huawei unveiled its first AI-supported laptop, the MateBook X Pro, equipped with an Intel Core Ultra 9 processor. The announcement drew criticism from Republican lawmakers in the United States, who faulted the Commerce Department for allowing Intel to export chips to Huawei. Notably, sources cited in a Reuters report on March 12th stated that Intel’s competitor AMD had applied for a similar license to sell comparable chips in early 2021 but never received approval from the US Department of Commerce.

In response to the matter surrounding Intel and Huawei, the Commerce Department confirmed the revocation of some export licenses to Huawei but declined to provide further details. Still, revoking the licenses not only damages Huawei but may also impact U.S. suppliers with business relationships with the company.

According to a report from Bloomberg, Qualcomm, which obtained a license in 2020, has been selling older 4G networking chips to Huawei, but the company expects its business to gradually decrease next year.

Another report from Reuters also indicated that Qualcomm continues to license its 5G patent portfolio to Huawei, allowing the latter to use HiSilicon’s 5G chips since last year, which has raised concerns about possible violations of U.S. sanctions. Additionally, according to the same report, documents submitted by Qualcomm this month indicated that its patent agreement with Huawei will expire in fiscal year 2025, earlier than expected, prompting negotiations on renewal agreements to begin sooner. Qualcomm has not responded to these reports.

Due to concerns over potential espionage activities by Huawei, the White House placed Huawei on the trade restriction list in 2019, which requires suppliers to apply for licenses before shipping goods to blacklisted companies. Despite this, Huawei’s suppliers have still obtained licenses worth billions of US dollars to sell goods and technology to the Chinese tech giant, including a license that has allowed Intel to sell CPUs to Huawei since 2020.

Republican Representative Elise Stefanik believes that revoking the licenses will strengthen U.S. national security, protect U.S. intellectual property rights, and thus weaken the technological advancement capabilities of communist China.

Previously, U.S. Commerce Secretary Gina Raimondo pointed out that the new chips introduced by Huawei are not as capable and lag behind U.S. chips by several years in performance, indicating that U.S. export controls on China are effective.


(Photo credit: iStock)

Please note that this article cites information from Reuters and Bloomberg.

