
駿HaYaO
The sparrow that lives on the Internet
AMD has released its first performance monitoring document for Zen 6, revealing details of the microarchitecture and confirming that Zen 6 is not just an incremental improvement over Zen 5 but a completely new design, built on TSMC's 2nm process and optimized for data centers.
The Zen 6 core features an 8-wide dispatch engine with simultaneous multithreading (SMT), in which two threads dynamically compete for resources, an approach that emphasizes throughput over extreme single-threaded performance. Compared to Apple's wide cores, single-thread performance may be slightly inferior, but the design is well suited to highly parallel workloads. The document describes dedicated counters that track unused dispatch slots and thread-arbitration losses, highlighting AMD's focus on its wide design.
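Counters for unused dispatch slots make it straightforward to derive a utilization figure for the 8-wide engine. The sketch below shows the arithmetic; the counter readings are invented for illustration, since the document's actual event names are not quoted here.

```python
# Illustrative only: turning a cycle count and an unused-dispatch-slot count
# into a dispatch-width utilization figure. The sample numbers are made up.
DISPATCH_WIDTH = 8  # Zen 6 dispatch slots per cycle, per the document

def dispatch_utilization(cycles, unused_slots):
    """Fraction of dispatch slots actually filled over a sampling window."""
    total_slots = cycles * DISPATCH_WIDTH
    return (total_slots - unused_slots) / total_slots

# e.g. 1M cycles with 2.4M empty slots -> 70% of the 8-wide engine used
print(dispatch_utilization(1_000_000, 2_400_000))  # 0.7
```

With SMT, the per-thread split of the filled slots would come from the thread-arbitration counters the document mentions.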
Vector processing has been significantly enhanced: full-width AVX-512 is supported across FP64, FP32, FP16, and BF16, along with FMA/MAC and mixed floating-point/integer instructions (such as VNNI, AES, and SHA). The 512-bit throughput is so high that merged counters are required to measure it precisely, demonstrating strong potential for math-intensive computation.
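The throughput claim is easy to put in back-of-envelope numbers. The sketch below computes peak FLOP/s for full-width AVX-512 FMA; the pipe count and clock speed are assumptions for illustration, not figures from AMD's document.

```python
# Peak-FLOPS arithmetic for full-width (512-bit) FMA execution.
# fma_pipes and clock_ghz are assumed values, not documented Zen 6 specs.
def peak_gflops(clock_ghz, fma_pipes, vector_bits=512, elem_bits=32):
    lanes = vector_bits // elem_bits         # 16 FP32 lanes per 512-bit op
    flops_per_cycle = lanes * 2 * fma_pipes  # FMA counts as multiply + add
    return clock_ghz * flops_per_cycle

# Assuming 2 full-width FMA pipes at 4.0 GHz:
print(peak_gflops(4.0, 2))  # 256.0 GFLOP/s FP32 per core
```

Halving `elem_bits` to 16 (FP16/BF16) doubles the lane count and hence the peak, which is why lower-precision formats dominate AI throughput figures.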
For the first time, Zen 6 is designed around a data-center-centric philosophy, with the EPYC "Venice" supporting up to 256 cores. The client version's features remain to be seen, but overall, Zen 6 is set to become a performance monster for compute-intensive applications.
The world's second-largest analog chip maker, Analog Devices, Inc. (ADI), has issued a price-increase notice to customers, planning to raise prices across its entire product line starting February 1, 2026.
The increase is not across-the-board but a differentiated scheme by customer tier and part number: the overall rise is expected to be around 15%, while nearly a thousand military-grade MPNs (manufacturer part numbers) may see increases as high as 30%.
The new prices will apply to all unshipped orders; specific pricing details and the adjustment list are expected to be provided to customers by the end of 2025.

Diamond thermal solutions aim to address the cooling pressure that the rapidly rising TDP of NVIDIA's AI GPUs places on systems and data centers:
1. Advantages of Diamond Material in Thermal Resistance
The traditional "copper lid + TIM + cold plate" thermal path becomes quite tight around 700W, with thermal resistance bottlenecked mainly at the few-hundred-micrometer interface between the chip and the cold plate. Copper has a thermal conductivity of about 400 W/m·K, while high-grade polycrystalline CVD diamond reaches 1000–1500 W/m·K and single crystal approaches 2000 W/m·K, roughly 2.5–5 times that of copper. Introducing diamond at the chip level (replacing current TIM materials) can cut vertical thermal resistance by over 50% at the same thickness and area, in practice lowering the junction temperature of 1–2kW-class GPUs by 10–20°C, or allowing several hundred watts of additional power at the original temperature limit. This lets the B200/B300 advance toward 1.2–1.4kW and Rubin/Ultra toward 2.3–3.5kW, allowing the same liquid-cooling or immersion-cooling hardware to support several more generations, while also leaving more thermal design space for additional GPUs per machine and per rack.
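The 10–20°C claim can be sanity-checked with simple 1-D slab conduction, R = t / (k·A). The geometry (a 100 µm interface layer over an 8 cm² die) and the ~8 W/m·K conductivity assumed for a conventional TIM are illustrative values, not figures from the text.

```python
# 1-D conduction sanity check: temperature drop across the chip/cold-plate
# interface layer for a conventional TIM vs. CVD diamond.
# Layer thickness, die area, and TIM conductivity are assumed values.
def layer_resistance(thickness_m, conductivity_w_mk, area_m2):
    """R = t / (k * A), in K/W, for 1-D conduction through a uniform slab."""
    return thickness_m / (conductivity_w_mk * area_m2)

t = 100e-6   # 100 um interface layer (assumed)
area = 8e-4  # 8 cm^2 die area (assumed)

r_tim = layer_resistance(t, 8.0, area)     # typical TIM, ~8 W/m.K (assumed)
r_dia = layer_resistance(t, 1500.0, area)  # polycrystalline diamond, per text

power = 1000.0  # 1 kW-class GPU
print(f"TIM: {power * r_tim:.1f} K, diamond: {power * r_dia:.2f} K")
```

Under these assumptions the TIM layer alone accounts for roughly 15 K of junction-temperature rise at 1 kW, while the diamond layer is nearly transparent, which is consistent in magnitude with the 10–20°C reduction quoted above.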
2. Significant Improvement in Packaging Reliability and Lifespan
When power consumption climbs to 2,000W or even above 3,000W, the temperature gradients and thermal stresses on the packaging, substrate, and motherboard will be magnified. This can lead to packaging warping and TIM voids, or in severe cases, solder fatigue and RDL/bump cracking, affecting long-term reliability. Diamond heat spreaders not only conduct heat well vertically but also have extremely high in-plane thermal conductivity, allowing for rapid flattening of hotspots within a few millimeters, dispersing the originally concentrated 300–500W heat peaks and significantly reducing the temperature differences across different areas of the chip. This effectively "relieves pressure" on the packaging and substrate: the mismatch in thermal expansion between silicon, packaging materials, and substrates is mitigated, extending the warping and solder fatigue cycles. For high-power GPUs like Rubin / Rubin Ultra / Feynman, prolonged LLM training and inference services can operate closer to nominal frequency stability, reducing computational waste caused by overheating throttling or abnormal re-runs, and increasing overall MTBF and lifespan.
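The in-plane spreading effect can be illustrated with the same 1-D conduction formula applied laterally. This is a deliberately crude estimate under assumed geometry (a 1 mm-thick, 30 mm-wide spreader carrying a hotspot's heat sideways over 3 mm); real spreading resistance requires 3-D analysis.

```python
# Crude lateral-conduction estimate: temperature drop when a spreader
# carries a hotspot's heat sideways. Geometry values are assumptions.
def lateral_dt(power_w, length_m, conductivity_w_mk, cross_section_m2):
    """dT = Q * L / (k * A) for 1-D conduction along the spreader plane."""
    return power_w * length_m / (conductivity_w_mk * cross_section_m2)

cross = 1e-3 * 30e-3  # 1 mm thick x 30 mm wide spreader cross-section
print(lateral_dt(400, 3e-3, 400.0, cross))   # copper:  100.0 K
print(lateral_dt(400, 3e-3, 1500.0, cross))  # diamond: ~26.7 K
```

Even in this rough model, diamond's higher in-plane conductivity flattens a 400 W hotspot's lateral gradient by a factor of nearly four, which is the mechanism behind the reduced warping and solder stress described above.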
3. Cost and Flexibility in Data Center Expansion
As the TDP of a single GPU increases, the total power of a rack quickly approaches or exceeds 120kW or 130kW, necessitating major upgrades to the data center's power distribution and cooling infrastructure. If the thermal conductivity of the chip does not improve, it will only lead to the continuous stacking of more expensive CDUs, cooling towers, and power distribution structures, often forcing the cooling water temperature to be kept very low and flow rates pushed to the limit to manage temperatures. With the introduction of diamond thermal solutions, a single GPU can operate at lower temperatures under the same water temperature and flow rate, reducing the likelihood of throttling, effectively increasing the "stable computing power per rack" available from each cabinet; at the same time, due to reduced thermal resistance, it may allow for slightly higher water temperatures or lower flow rates, decreasing pump and chiller energy consumption. More importantly, it opens up thermal design flexibility for future GPUs like Rubin Ultra and Feynman, which are in the 3.5kW–5kW range, allowing system manufacturers and cloud providers to consider diamond cooling as a "material-level upgrade option" when planning the next generation of AI clusters, integrating cooling into the initial architectural design rather than waiting for thermal failures to find solutions.
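The rack-level arithmetic behind the 120–130 kW figure can be sketched directly. The GPU count, per-GPU TDP, and 30% non-GPU overhead below are assumptions for illustration, not vendor specifications.

```python
# Illustrative rack power budget: per-GPU TDP times GPU count, plus an
# assumed overhead fraction for CPUs, networking, and cooling losses.
def rack_power_kw(gpus_per_rack, gpu_tdp_w, overhead_frac=0.3):
    """Total rack power in kW; overhead_frac covers non-GPU load (assumed)."""
    gpu_kw = gpus_per_rack * gpu_tdp_w / 1000
    return gpu_kw * (1 + overhead_frac)

# Assuming a 72-GPU rack:
print(rack_power_kw(72, 1200))  # ~112 kW at 1.2 kW per GPU
print(rack_power_kw(72, 1400))  # ~131 kW at 1.4 kW -- past 130 kW
```

The same formula shows why 3.5–5 kW-class GPUs force either fewer GPUs per rack or a wholesale infrastructure upgrade, which is the planning decision the diamond option is meant to ease.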

