- Google released the Gemma 4 open model on April 3 under the Apache 2.0 license, highlighting its capabilities in reasoning, agentic workflows, multimodality, and on-device deployment. Google insiders have called it the company's strongest open model, with performance said to exceed models ten times its parameter count. Community attention centered on the license change, with the release viewed as a genuinely open-weights launch with broad downstream application potential. On launch day it received full ecosystem support from vLLM, llama.cpp, Ollama, Intel hardware platforms, Unsloth, Hugging Face, and Google AI Studio, enabling immediate cross-platform deployment. Architecturally, it adopts an MoE design and integrates vision and audio encoders.
Gemma 4 is licensed under Apache 2.0
Full ecosystem support arrived on launch day
Performance reportedly exceeds models ten times its size
- Early data suggests that Latent Space's recent podcast episode with Marc Andreessen is on track to become one of the platform's most popular episodes. Meanwhile, the developers behind OpenClaw and Pi, two AI tools with European origins, will present live in London next week. The AIE Europe livestream link has been published and includes a song created by OpenClaw. The platform reminds users that enabling subscription notifications boosts content visibility in the algorithm.
The Marc Andreessen podcast is performing strongly
European AI tool developers will present live
The AIE Europe livestream link has been published
- Between April 3 and April 4, 2026, AI News monitored 12 subreddits and 544 tweets, finding no further Discord activity. Its website supports searching historical content. The platform has now merged into Latent Space, and users can choose their own email delivery frequency. This issue covers the Gemma 4 release, local inference performance, and ecosystem support.
Monitoring covered 12 subreddits and 544 tweets
The website supports historical content search
Users can customize email delivery frequency
1. Gemma 4 Open-Model Release with Apache 2.0 License and Broad Ecosystem Support
Google launched Gemma 4 under the Apache 2.0 license, positioning it as a high-performance open-weight model optimized for reasoning, agentic workflows, multimodality, and on-device inference. The release was accompanied by strong endorsements from key figures: François Chollet praised it as Google's strongest open model to date and recommended its JAX backend via KerasHub, while Demis Hassabis emphasized its efficiency, claiming it outperforms models ten times larger on internal benchmarks. The Apache 2.0 designation marks a notable shift from previous restrictive licenses and was widely interpreted as enabling broader commercial and research use, with leaders like Clement Delangue and QuixiAI highlighting its "real" open-weights status.
The model achieved rare day-0 ecosystem readiness, with immediate support across major platforms including vLLM (supporting GPU, TPU, and XPU), llama.cpp, Ollama, Intel hardware (Xeon, Xe GPU, Core Ultra), Unsloth for local fine-tuning, Hugging Face Inference Endpoints for one-click deployment, and integration with Google AI Studio. Technical deep dives by experts like Omar Sanseviero and Maarten Grootendorst provided detailed visual analyses of its Mixture-of-Experts (MoE) architecture, vision/audio encoders, and layer-wise embeddings. Local inference benchmarks indicated strong performance, though detailed results were not fully disclosed.
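The article notes Gemma 4's MoE architecture without disclosing its routing details. As a rough illustration of how an MoE layer selects experts per token, here is a generic top-k gating sketch in NumPy; the function name `topk_router` and all shapes are illustrative, not drawn from Gemma 4 itself.

```python
import numpy as np

def topk_router(x, gate_w, k=2):
    """Generic top-k MoE gate: pick k experts per token via a softmax gate.

    x: (tokens, d) hidden states; gate_w: (d, n_experts) gating weights.
    Returns per-token expert indices and renormalized routing weights.
    """
    logits = x @ gate_w                                  # (tokens, n_experts)
    top_idx = np.argsort(logits, axis=-1)[:, -k:]        # top-k expert ids
    top_logits = np.take_along_axis(logits, top_idx, axis=-1)
    # softmax over only the selected experts, so weights sum to 1 per token
    w = np.exp(top_logits - top_logits.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return top_idx, w

rng = np.random.default_rng(0)
tokens = rng.standard_normal((4, 16))   # 4 tokens, hidden size 16
gate = rng.standard_normal((16, 8))     # 8 hypothetical experts
idx, w = topk_router(tokens, gate, k=2)
print(idx.shape, w.shape)               # (4, 2) (4, 2)
```

Each token's output is then a weighted sum of its k selected experts' outputs, which is what lets an MoE model match much larger dense models while activating only a fraction of its parameters per token.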
Key Takeaways:
Gemma 4 offers open access under Apache 2.0, enabling wide commercial use
Ecosystem support launched simultaneously across major AI deployment platforms
Model efficiency claims suggest performance gains over much larger models
Technical architecture emphasizes multimodality and on-device inference capabilities
Source: Original Article
View Original →