A dense AI model with 32B parameters, excelling in coding, math, and local deployment. Compact, efficient, and powerful ...
Carnegie Mellon University researchers propose a new LLM training technique that gives developers more control over chain-of-thought length.
A team of researchers at the University of Edinburgh tested several leading multimodal large language models to see how well they could ...
While LLMs have achieved impressive results in many domains, their struggles with analogical reasoning suggest that they do not yet match human cognitive flexibility. The study emphasizes the need for ...
Harvey has announced that ‘next-generation agents’ will now be available on its platform, which it defines as ‘systems that ...
Zoom AI Companion expands agentic skills across the entire Zoom platform, using reasoning and memory to take action and orchestrate task ...
Using a variety of analogy puzzles, SFI researchers have shown that the reasoning abilities of OpenAI’s GPT-4 model fall ...
For years, search engines and databases relied on exact keyword matching, often leading to fragmented and context-lacking results. The introduction of generative AI and the emergence of ...
Alibaba developed QwQ-32B through two training sessions. The first session focused on teaching the model math and coding ...