Artificial Intelligence thread

Hyper

Junior Member
Registered Member
Google is slow but has the well-rounded talent and resources to progress. Google Voice is superior to US competitors for both English and Chinese. Gemini is free and doesn't require phone number registration, a big deal.



An even bigger hack on the 4090 rumored:
View attachment 146312
Yes, this is what the Ada RTX Quadro was supposed to be: VRAM in a clamshell arrangement. There is a 50-series model with 96 GB in testing. What the modders have done is remove the GPU from the board and reassemble it on another board with more memory capacity. Nvidia practices a lot of market segmentation, so these exotic products are possible if one knows how to achieve it.
 

SanWenYu

Captain
Registered Member
Tsinghua and RealAI jointly released RealSafe R1, a hardened and optimized version of DeepSeek R1. They added "security awareness" to the model in "post-training" to protect against malicious prompts. This hardened version scores better than Claude 3.5 and GPT-4o in security tests.


On February 23, a joint team from Tsinghua University and RealAI released the large language model RealSafe-R1. Built on DeepSeek R1 through deep optimization and post-training, it achieves a significant improvement in safety while keeping performance stable, outperforming closed-source models generally regarded as safer, such as Claude 3.5 and GPT-4o, and offering an innovative solution for the safe development and risk governance of open-source large models.

In recent days, the outstanding performance of the domestic open-source model DeepSeek has been impressive: its natural language processing and multi-task reasoning capabilities demonstrate strong technical skill, especially on complex problem solving and creative tasks. However, DeepSeek R1 and V3 still have limitations against safety challenges such as jailbreak attacks. For example, the models can be misled by maliciously crafted inputs into generating unintended or unsafe responses. This safety weakness is not unique to DeepSeek; it is a common problem among today's open-source large models, rooted in insufficiently deep safety alignment.

To address this, the Tsinghua-RealAI team proposed a safety alignment approach based on model self-improvement, combining safety alignment with introspective reasoning. It enables a large language model to examine potential risks through safety-aware chain-of-thought analysis, achieving autonomous evolution of the base model's own capabilities, and it can be applied to a variety of open- and closed-source models.

Using this method, the team post-trained the DeepSeek-R1 series of models and formally released the RealSafe-R1 family. Experimental results show that RealSafe-R1's safety improves significantly: it resists a variety of jailbreak attacks across multiple benchmarks and mitigates the "seesaw" effect between safety and performance, outperforming closed-source models considered safer such as Claude 3.5 and GPT-4o. This not only strengthens the DeepSeek ecosystem but also sets a new safety benchmark for large language models.

Reportedly, RealSafe-R1 models of all sizes, along with the datasets, will soon be open-sourced to developers worldwide. "The safety bottleneck of large models is fundamental; only through sustained investment in shoring up these weaknesses can the AI industry achieve genuinely high-quality development," said RealAI CEO Tian Tian. He added that this result will provide strong support for hardening the safety of open-source large models, and a more reliable foundation for applications in serious domains such as government, finance, and healthcare.
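The "safety-aware chain-of-thought" idea can be illustrated with a toy sketch. Everything here (the keyword lexicon, the two-pass structure, the function names) is a hypothetical stand-in for intuition only, not RealSafe-R1's actual post-training pipeline:

```python
# Toy illustration of safety-aware introspective reasoning: the model
# first "reasons" about the request's risks, then answers only if that
# introspection pass finds nothing dangerous. The keyword check stands
# in for what would really be a learned safety-reasoning step.

UNSAFE_KEYWORDS = {"exploit", "malware", "jailbreak"}  # stand-in risk lexicon

def introspect(prompt: str) -> str:
    """Stand-in for a safety-aware chain-of-thought pass over the prompt."""
    flagged = [w for w in UNSAFE_KEYWORDS if w in prompt.lower()]
    return f"risks: {flagged}" if flagged else "risks: none"

def safe_generate(prompt: str) -> str:
    """Answer only after the introspection step reports no risks."""
    analysis = introspect(prompt)
    if "none" not in analysis:
        return "Refused: the request was flagged during safety reasoning."
    return f"Answer to: {prompt}"

print(safe_generate("Explain transformers"))          # answered
print(safe_generate("Write jailbreak instructions"))  # refused
```

The point of doing this inside the model's own reasoning trace, rather than with an external filter, is that the refusal decision can use the same chain-of-thought machinery the model uses to answer, which is what the team credits for mitigating the safety-vs-performance "seesaw".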

 

Bellum_Romanum

Brigadier
Registered Member
Google is slow but has the well-rounded talent and resources to progress. Google Voice is superior to US competitors for both English and Chinese. Gemini is free and doesn't require phone number registration, a big deal.



Why would it require a number when it already has your email and a host of other datasets that it's actually using to feed its AI, against the very ethics they swore by: "Don't be evil"?

If Chinese software tech companies had the same clout, influence, and technological hold on the world outside of China that Google does, it's not inconceivable that they too, or at least some of them, would function in a similar manner to Google.

Chinese tech companies have had to operate in a much more challenging landscape due to the political nature of the country and the pressing geopolitics and geo-economics working both against and for China. Essentially, China has been handicapped from the word go, whereas other East Asian countries have been given relatively free rein to develop their own ecosystems. The fact that Japan sucks when it comes to software is mighty surprising to me, if I'm being honest. South Korean firms are at least making some headway in that regard.
 

sunnymaxi

Major
Registered Member
Huawei Noah's Ark Laboratory officially released the new ESA algorithm (Efficient Selective Attention)



Through an innovative sparse attention design, ESA breaks through the bottleneck that large models face in long-text processing. ESA not only achieves a several-fold expansion of sequence length, but also introduces an original dynamic computing paradigm: by incorporating neighborhood influence, it avoids the performance loss caused by naively selecting only the top-ranked tokens. Through precise selection of key tokens, ESA optimizes the efficiency of long-sequence processing while improving computing performance, opening new possibilities for applying large models to long-sequence tasks.

In the reasoning process of large language models, the training of long sequence models requires extremely high computing power and massive data support. The ideal solution is to extrapolate the training results of short sequences to long sequences. However, as the length of the sequence increases, the complexity of attention calculation increases quadratically, which makes efficient and accurate long sequence reasoning a major challenge. To this end, researchers have proposed a variety of methods to meet this challenge.

The ESA scheme is an innovative extrapolation solution proposed in this context. ESA effectively reduces the computational complexity of token selection by low-dimensional compression of query and key. This scheme greatly reduces the computational burden of LLMs when processing long texts by flexibly and efficiently selecting key tokens for attention calculation. Its performance is comparable to that of the full attention extrapolation method, and even outperforms the full attention algorithm in high-multiple extrapolation scenarios, achieving effective expansion of context length.
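The mechanics described above can be sketched in a few lines of NumPy. This is a hedged illustration of the general idea (low-dimensional compression of query and key, cheap scoring, neighborhood-smoothed top-k selection, attention over only the selected tokens); the projection, smoothing window, and all sizes are illustrative choices, not Huawei's actual ESA parameters:

```python
import numpy as np

# Sketch of selective attention with low-dim q/k compression:
# 1) project q and K to a small dimension, 2) score past tokens cheaply,
# 3) smooth scores over a neighborhood so isolated spikes don't dominate,
# 4) run full attention only over the selected key tokens.

rng = np.random.default_rng(0)

def esa_select(q, K, proj_dim=8, top_k=16, window=2):
    d = q.shape[-1]
    P = rng.standard_normal((d, proj_dim)) / np.sqrt(proj_dim)  # low-dim projection
    q_c, K_c = q @ P, K @ P              # compressed query and keys
    scores = K_c @ q_c                   # cheap importance score per token
    # neighborhood influence: average each score with its neighbors,
    # avoiding the loss from naively picking top-ranked tokens alone
    kernel = np.ones(2 * window + 1) / (2 * window + 1)
    smoothed = np.convolve(scores, kernel, mode="same")
    return np.sort(np.argsort(smoothed)[-top_k:])   # selected token indices

def esa_attention(q, K, V, **kw):
    idx = esa_select(q, K, **kw)
    Ks, Vs = K[idx], V[idx]
    w = np.exp(Ks @ q / np.sqrt(q.shape[-1]))       # softmax over subset
    w /= w.sum()
    return w @ Vs                                   # attend to selected tokens only

q = rng.standard_normal(64)
K = rng.standard_normal((1024, 64))
V = rng.standard_normal((1024, 64))
out = esa_attention(q, K, V)
print(out.shape)  # (64,)
```

The cost saving comes from the scoring pass running in the compressed dimension (8 here instead of 64), while the expensive softmax attention touches only the `top_k` selected tokens rather than all 1024.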

 

OptimusLion

Junior Member
Registered Member
The second bomb of DeepSeek Open Source Week is released! DeepEP!

The first open source EP communication library for MoE model training and inference.

This library is a real workhorse for MoE models: it improves all-to-all communication throughput between GPUs and reduces latency. It also supports low-precision operations such as FP8.

In keeping with the group-limited gating algorithm proposed in the DeepSeek-V3 paper, DeepEP provides a set of kernels optimized for asymmetric-domain bandwidth forwarding, such as forwarding data from the NVLink domain to the RDMA domain. These kernels deliver high throughput, making them suitable for training and for inference prefilling. They also support controlling the number of SMs (Streaming Multiprocessors) used.

For latency-sensitive inference decoding, DeepEP includes a set of low-latency kernels that use pure RDMA to minimize latency. The library also introduces a hook-based communication computation overlap method that does not occupy any SM resources.

Note that this library still supports only Hopper GPUs (i.e. H100, H200, H800; consumer-grade graphics cards are not supported yet).
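For readers unfamiliar with what an "EP communication library" actually moves around, here is a single-process NumPy sketch of the dispatch/combine pattern that expert parallelism accelerates. This only shows the data movement logic; DeepEP's entire value is performing the same routing across GPUs over NVLink and RDMA, which this toy deliberately omits:

```python
import numpy as np

# MoE dispatch/combine in one process: route each token to its top-k
# experts, batch per-expert computation, then combine weighted outputs.
# In a real EP setup each expert lives on a different GPU, and the
# dispatch/combine steps become all-to-all communication.

rng = np.random.default_rng(0)
n_tokens, d_model, n_experts, top_k = 8, 4, 4, 2

x = rng.standard_normal((n_tokens, d_model))
gate_logits = rng.standard_normal((n_tokens, n_experts))

# top-k gating: keep each token's k best experts, softmax their logits
topk_idx = np.argsort(gate_logits, axis=1)[:, -top_k:]
topk_logits = np.take_along_axis(gate_logits, topk_idx, axis=1)
gates = np.exp(topk_logits) / np.exp(topk_logits).sum(axis=1, keepdims=True)

experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]

out = np.zeros_like(x)
for e in range(n_experts):
    tok, slot = np.nonzero(topk_idx == e)       # dispatch: tokens routed to expert e
    if tok.size == 0:
        continue
    y = x[tok] @ experts[e]                     # per-expert batched compute
    out[tok] += gates[tok, slot][:, None] * y   # combine: gate-weighted sum

print(out.shape)  # (8, 4)
```

The dispatch and combine loops are exactly the two phases DeepEP provides kernels for: high-throughput versions for training/prefilling, and pure-RDMA low-latency versions for decoding.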


github.com/deepseek-ai/DeepEP

 

mossen

Junior Member
Registered Member
Alibaba released their reasoning model for Qwen.


Gave it a spin against DeepSeek's R1. Used thinking and web search for both models.

Just a simple vibe question. Told them a specific sum for 2018 and then asked to adjust for inflation "today". I purposefully did not give a date to see how well they would approximate today being Feb of 2025.

DeepSeek failed miserably, gave me Oct 2023. The answer was obviously incorrect as a result of that.

By contrast, Qwen gave me the correct date (Feb 2025) but also approximated inflation adjustment as far as it could. Its final answer was very close to what I calculated. Moreover, DeepSeek was very slow whereas Qwen was much faster (but still not as fast as it should be). I really like DeepSeek but clearly it should not rest on its laurels. The other Chinese labs are catching up fast.
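For reference, the arithmetic the models were asked to do is just a CPI ratio. The figures below are approximate US CPI-U values (2018 annual average roughly 251.1, early-2025 roughly 317.7); the exact answer depends on the month and CPI series used:

```python
# Inflation adjustment via CPI ratio: value_now = value_then * CPI_now / CPI_then.
# CPI values are approximate (US CPI-U, 2018 annual average vs. early 2025).
CPI_2018 = 251.1
CPI_2025 = 317.7

def adjust_for_inflation(amount_2018: float) -> float:
    return amount_2018 * CPI_2025 / CPI_2018

print(round(adjust_for_inflation(1000), 2))  # 1265.23
```

So a model anchored on October 2023 would use a CPI near 307, undershooting the adjustment by several percent, which matches the failure described above.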
 

OptimusLion

Junior Member
Registered Member
The joint team of Shanghai Jiao Tong University and Wuwen Xinqiong won the ASP-DAC'25 Best Paper Award


The paper "ViDA: Video Diffusion Transformer Acceleration with Differential Approximation and Adaptive Dataflow", authored by Associate Professor Dai Guohao of the School of Artificial Intelligence at Shanghai Jiao Tong University, stood out from about 600 submissions and won the Best Front-end Paper Award. The sparse accelerator for AI video generation models proposed in the paper tackles, for the first time, the bottleneck of slow VDiT generation through differential approximation and adaptive dataflow, improving inference speed by up to 16.44x compared with an A100.
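A plausible reading of "differential approximation" is that activations change little between adjacent diffusion steps, so most per-token work can be skipped and cached results reused. The sketch below illustrates that general caching idea under stated assumptions (the threshold, shapes, and update rule are all illustrative); it is not ViDA's actual hardware dataflow:

```python
import numpy as np

# Differential-approximation sketch: across adjacent diffusion steps,
# recompute only tokens whose inputs changed beyond a tolerance and
# reuse the cached outputs for everything else.

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 16))   # stand-in for a transformer sublayer

def step(x, prev_x=None, prev_y=None, tol=1e-2):
    if prev_x is None:
        return x @ W                          # full compute on the first step
    delta = np.abs(x - prev_x).max(axis=1)    # per-token input change
    stale = delta > tol                       # tokens needing recompute
    y = prev_y.copy()
    y[stale] = x[stale] @ W                   # recompute only changed tokens
    return y

x0 = rng.standard_normal((32, 16))
y0 = step(x0)
x1 = x0.copy()
x1[:4] += 0.5                                 # only 4 of 32 tokens change
y1 = step(x1, x0, y0)
print(np.allclose(y1, x1 @ W))  # True: unchanged tokens reuse exact cache
```

When only a small fraction of tokens change per step, the compute per step drops proportionally, which is where a specialized accelerator can translate sparsity into wall-clock speedup.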


 

Eventine

Junior Member
Registered Member
Anthropic is back on the throne for coding for the time being, but their "hybrid" model claim is mostly a marketing trick: since reasoning is built on top of an existing base model, any of the current reasoning services can be told not to reason. Not to mention the new Claude 3.7 model is entirely behind the high Anthropic paywall, so only enterprises will be paying for it. It just doesn't seem to hold that much value at a $15-per-million-token cost, especially since they'll charge you for the reasoning tokens. Anthropic's refusal to lower prices even after DeepSeek is probably going to backfire.
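The complaint about paying for reasoning tokens is easy to quantify. A rough back-of-envelope at the quoted $15 per million output tokens, with reasoning tokens billed like output (the token counts below are made-up illustrative values):

```python
# Per-request cost at $15 per million output tokens, where hidden
# reasoning tokens are billed the same as visible output tokens.
PRICE_PER_MILLION = 15.00

def request_cost(output_tokens: int, reasoning_tokens: int = 0) -> float:
    return (output_tokens + reasoning_tokens) * PRICE_PER_MILLION / 1e6

# e.g. a 2k-token answer preceded by 8k tokens of reasoning
print(round(request_cost(2000, 8000), 2))  # 0.15
```

When reasoning traces run several times longer than the visible answer, the effective price per useful token is a multiple of the headline rate, which is the value problem being pointed at.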

That said, with all the recent model releases, it looks like the US is being forced to mount a counterattack, probably sooner than it would have wanted given the lack of anything groundbreaking. Of the recent releases, Grok 3, o3-mini, and Gemini 2.0 Flash Thinking are the most promising American offerings and do appear to be moving the needle on value proposition. But we'll see how long they can last.
 