The Experience Technology Department of Ant Group's Alipay has officially open-sourced Neovate Code, an intelligent programming assistant. It can deeply understand your codebase, follow your existing coding conventions, and accurately complete function implementation, bug fixing, and code refactoring with context awareness. It integrates the core capabilities a Code Agent requires. GitHub: https://github.com/neovateai/neovate-code
At present, Neovate Code is provided in the form of a CLI tool, but its architecture is highly flexible and will support multiple client forms in the future to adapt to more development scenarios.
Its main functions include:
- Conversational development: a natural dialogue interface for programming tasks
- AGENTS.md rule file: define custom rules and behaviors for your project
- Conversation continuation and resumption: continue previous work across conversations
- Support for popular models and providers: OpenAI, Anthropic, Google, etc.
- Slash commands: quick commands for common operations
- Output style: customize the way code changes are presented
- Planning mode: review the implementation plan before execution
- Headless mode: automate workflows without interactive prompts
- Plugin system: extend functionality with custom plugins
- MCP: Model Context Protocol for enhanced integration
- Git workflow: intelligent commit messages and branch management
…
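As a rough illustration of the AGENTS.md rule file, here is a hypothetical example (the contents and section names are invented; Neovate's own docs define what it actually honors). It is plain markdown that the agent reads as project-level instructions:

```markdown
# Project rules (hypothetical example)

## Coding conventions
- Use TypeScript strict mode; avoid `any`.
- Follow the existing ESLint config; do not introduce new lint rules.

## Workflow
- Write a failing test before fixing a bug.
- Keep commits small and use Conventional Commits for messages.
```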
Dear friends, there's news about DeepSeek! The latest model, DeepSeek-V3.1-Terminus, has made its debut! 👏
This version comes in two modes, thinking and non-thinking, both with a context length of 128K. It is an upgrade of DeepSeek-V3.1 with two major improvements. First, language consistency: it reduces mixed Chinese-English output and the occasional appearance of abnormal characters; the "extreme" character issue mentioned before has also improved. Second, agent capabilities: the performance of the Code Agent and Search Agent has been further optimized, making them even more capable. DeepSeek's last update was on August 21st. Only a month later, the new DeepSeek-V3.1-Terminus has outperformed Gemini 2.5 Pro in many evaluations.
However, in benchmark performance it is only a slight overall upgrade over DeepSeek-V3.1, with slight declines on some benchmarks. But on the Humanity's Last Exam benchmark, the improvement is huge: a 36.48% relative gain, jumping from 15.9 to 21.7. That's really impressive!
DeepSeek-V3.1-Terminus is now live in the app, on the web, and via the API.
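DeepSeek's API is OpenAI-compatible, with the two modes exposed as separate model ids. A minimal sketch of building a chat-completion request follows; the endpoint URL and the model names `deepseek-chat` (non-thinking) and `deepseek-reasoner` (thinking) follow DeepSeek's public docs, but verify them before relying on this:

```python
import json

# Assumed OpenAI-compatible endpoint per DeepSeek's public documentation.
BASE_URL = "https://api.deepseek.com/chat/completions"

def build_request(prompt: str, thinking: bool = False) -> dict:
    """Build a chat-completion payload; 'deepseek-reasoner' selects the
    thinking mode, 'deepseek-chat' the non-thinking mode."""
    return {
        "model": "deepseek-reasoner" if thinking else "deepseek-chat",
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Summarize this changelog.", thinking=True)
print(json.dumps(payload, indent=2))
```

Sending the payload is a standard authenticated POST to the endpoint, so existing OpenAI-client code can switch over by changing the base URL and model name.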
Here's the Hugging Face address: https://huggingface.co/deepseek-ai/DeepSeek-V3.1-Terminus
By the way, the word "Terminus" means "end". Does this imply that this is the last version of the V3 series and that DeepSeek-V4/R2 is coming soon? It's really exciting!
Dear friends, what do you think of DeepSeek-V3.1-Terminus? Come and share your thoughts in the comments section!
Hey everyone, the IPADS lab team at Shanghai Jiao Tong University has done something amazing! They've released MobiAgent, a brand-new mobile agent toolchain 🎉. It's a big deal: it breaks down the barrier to building personalized intelligent assistants, and its real-world performance is said to beat GPT-5 and other top closed-source models 👍
MobiAgent is really impressive: it gives anyone the chance to build their own AI assistant. The toolchain supports building a mobile agent from scratch, covering the whole pipeline from task data collection and model training to deployment on a phone. And it's open source: users can collect their own data, train a model, and run an intelligent assistant on their personal device. So convenient 🥰
To verify its performance, the research team tested it on 20 popular Chinese apps. The results show that the 7B-parameter MobiAgent model surpassed many well-known closed-source large models in task-completion scores and leads among open-source GUI agents of the same scale 👏. Its distinctive "latent memory accelerator" learns from past tasks, helping the agent finish repetitive tasks quickly and boosting performance by 2-3x.
MobiAgent's core lies in its efficient data collection and intelligent training pipeline. A lightweight tool records the user's phone operations, and a general-purpose VLM then generates high-quality training data from them. After careful fine-tuning, the trained agent generalizes well. Its "brain" is split into three parts: a planner responsible for task planning, a decider that makes decisions based on the screen, and an executor that carries out the concrete actions. This structure makes model training more efficient and greatly improves response speed 😎
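MobiAgent's actual implementation isn't shown in this post; purely as a toy sketch of the planner/decider/executor split (all names and behaviors invented for illustration), the planner decomposes a task into steps, the decider maps each step plus the current screen to a concrete action, and the executor performs it:

```python
from dataclasses import dataclass, field

@dataclass
class ToyMobileAgent:
    """Illustrative three-part agent; not MobiAgent's real code."""
    log: list = field(default_factory=list)

    def plan(self, task: str) -> list:
        # Planner: decompose the task into high-level steps.
        return [f"open app for: {task}", f"perform: {task}", "confirm result"]

    def decide(self, step: str, screen: str) -> str:
        # Decider: choose a concrete action based on the current screen.
        return f"tap element matching '{step}' on {screen}"

    def execute(self, action: str) -> None:
        # Executor: carry out the chosen action (here, just record it).
        self.log.append(action)

    def run(self, task: str, screen: str = "home") -> list:
        for step in self.plan(task):
            self.execute(self.decide(step, screen))
        return self.log

agent = ToyMobileAgent()
print(agent.run("order coffee"))
```

Splitting the roles this way means each component can be trained or swapped independently, which is the efficiency argument the post describes.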
There's also the innovative AgentRR acceleration framework, which reuses past task experience to greatly speed up repeated tasks, with an action reuse rate of up to 60%-85%. The assistant handles routine chores quickly and accurately.
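AgentRR's internals aren't detailed here; as a minimal sketch of the record-and-replay idea it describes (class and function names hypothetical), a completed task stores its action trace, and a repeat of the same task replays the cached actions instead of re-deriving each one:

```python
class ActionReplayCache:
    """Toy record-and-replay store in the spirit of AgentRR (illustrative only)."""

    def __init__(self):
        self._traces = {}   # task -> recorded action sequence
        self.replays = 0    # how many times a cached trace was reused

    def run(self, task, derive_actions):
        if task in self._traces:           # repeat task: reuse recorded actions
            self.replays += 1
            return self._traces[task]
        actions = derive_actions(task)     # first time: derive and record
        self._traces[task] = actions
        return actions

cache = ActionReplayCache()
derive = lambda t: [f"step-{i} for {t}" for i in range(3)]
first = cache.run("check weather", derive)
second = cache.run("check weather", derive)
assert first == second and cache.replays == 1
```

The reported 60%-85% reuse rate would correspond to that fraction of actions coming from the cache rather than fresh model calls, which is where the 2-3x speedup comes from.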
The arrival of MobiAgent not only makes it convenient to customize a personal intelligent assistant but also pushes the mobile agent ecosystem forward. An era of "just say it, no need to lift a finger" really seems to be on its way 🤩
Hey everyone, OpenAI has made another big move! It announced today that ChatGPT's Projects feature is officially open to free users. What great news 👏
This update brings improvements for every user tier. Starting with file-upload limits: free users can upload up to 5 files per day, Plus users get 25, and Pro, Business, and Enterprise users get 40. This tiered design is considerate: whatever your needs, there's a plan that fits 🥰
OpenAI has also added plenty of personalization options. Users can now customize a project's color and icon, which makes the management interface feel much more personal and should boost productivity. For anyone who needs to keep context consistent, the new project-level memory controls are genuinely useful: they adapt better to different conversation scenarios and make information management easy 😎
This series of updates shows how closely OpenAI is paying attention to users' needs. Whether you're an enterprise user or an individual, these new features should make the ChatGPT experience smoother.
It has to be said: this update is a major upgrade to the user experience. The platform is more attractive than ever, and more users can now enjoy the convenience of AI on equal footing. ChatGPT will surely keep being optimized, so let's look forward to more surprises 🤩
So, everyone, are you excited about these new ChatGPT features? Share your thoughts in the comments 🧐
#ChatGPT #NewFeatureLaunch #ProjectManagement #UserExperience #FreeUsers #Personalization
Dear friends, here's some big news! At 00:00 on September 1, 2025, the "Measures for the Identification of Artificial Intelligence-Generated and Synthesized Content" jointly formulated by multiple government departments officially took effect! 🎉 This measure puts forward regulatory requirements such as the mandatory addition of explicit and implicit identifications. From now on, AI-generated text, images, audio, and video must all show their "digital ID cards"🧐
Before this, many platforms such as Tencent, Douyin, Kuaishou, and Bilibili had already introduced detailed rules. Take Douyin for example, it has launched an AI content identification function and an AI content metadata identification reading and writing function, which help creators add prompt identifications and also provide technical support for content traceability👏
Now the ecological chain of AI-generated content has entered a stage of standardized management. Artificial intelligence is developing extremely rapidly. In 2024, the scale of China's artificial intelligence industry exceeded 700 billion yuan and has maintained a high growth rate year after year. However, the popularization of technology has also brought new risks. For example, there are more and more cases of it being used to create false news and carry out online fraud.
The core of the policy of the "Measures for the Identification" is the requirement of dual identifications. Explicit identifications should be "visible at a glance" to ordinary users. For example, add text explanations at the beginning and end of an article, or add voice prompts or special icons in audio and video. Implicit identifications, on the other hand, are to embed "hidden information" in the file metadata, including various key information.
This measure is highly significant. Professor Ren Kui, one of its drafters, said it is the first time that generation service providers, content dissemination platforms, and end users have been brought into a unified governance framework, forming a systematic progression with other regulations and clarifying the boundaries of responsibility. It can promote the standardized development of the AIGC industry, rebuild public trust in AIGC technology, strengthen China's voice in AI security governance, and provide a model for global content governance 👍
Let's talk about the dual identification system again. Explicit identifications should be directly perceived by users. Texts should mark words such as "generated by artificial intelligence" in specific positions, and the font should be clear. Implicit identifications focus on technical traceability, embedding metadata inside the file, containing various key information. There are clear labeling requirements for different types of AI-generated content.
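As a loose sketch of what the dual labels could look like in practice (the format here is entirely illustrative, not the schema prescribed by the Measures): an explicit notice wrapped around the text for human readers, plus an implicit machine-readable record carried in accompanying metadata:

```python
import json

EXPLICIT_NOTICE = "[Generated by artificial intelligence]"

def label_text(body: str, provider: str, model: str):
    """Return (labeled_text, metadata_json): an explicit notice at the start
    and end for human readers, and implicit metadata for traceability."""
    labeled = f"{EXPLICIT_NOTICE}\n{body}\n{EXPLICIT_NOTICE}"
    metadata = json.dumps({
        "aigc": True,          # flag marking AI-generated content
        "provider": provider,  # generation service provider
        "model": model,        # model that produced the content
    })
    return labeled, metadata

text, meta = label_text("Sample article body.", "ExampleAI", "demo-1")
assert text.startswith(EXPLICIT_NOTICE) and text.endswith(EXPLICIT_NOTICE)
assert json.loads(meta)["aigc"] is True
```

A dissemination platform could then check the metadata record on upload, which mirrors the verification obligation described below.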
The "Measures for the Identification" also encourages the use of AI for original content creation. Moreover, it clarifies the obligations of different entities at the legal level. Service providers need to ensure that the content meets the identification requirements. Dissemination platforms need to verify implicit identifications and add significant prompt identifications. Application distribution platforms need to verify the identification functions of service providers.
However, the implementation of this measure also faces challenges. Users may delete explicit identifications or avoid implicit ones through transcoding, making it difficult to accurately identify the content posted by malicious users. Lawyers suggest that content publishing platforms should assume more responsibilities. Professor Ren Kui suggests from a technical perspective the development of secure content implicit identification technology.
All in all, identification is a crucial step in the governance of AI-generated content. But to truly avoid risks, it is also necessary to refine laws and regulations, establish industry self-discipline standards, strengthen law enforcement efforts, and enhance international cooperation. Cross-border AIGC law enforcement is also a challenge. In the future, it is necessary to promote the coordination of technical identifications and establish cross-border law enforcement mutual assistance mechanisms. Dear friends, what do you think about the mandatory "labeling" of AI-generated content? 🤔
#AI-generated content #Mandatory labeling #Content security governance #Dual identification system #Main body responsibility #Supervision challenges
On the evening of August 19th, DeepSeek officially announced that the online model version has been upgraded to V3.1. The most significant improvement is that the context length has been extended to 128K, which is equivalent to being able to process super-long texts of 100,000 to 130,000 Chinese characters, suitable for long document analysis, code library understanding and multi-round dialogue scenarios.
Users can now experience the new version through the official website, App or WeChat mini-program. The API interface call method remains unchanged, and developers can switch seamlessly without additional adjustments.
This upgrade is not a major version iteration, but an optimization of the V3 model. Tests show that V3.1 has a 43% improvement in multi-step reasoning tasks compared to the previous generation, especially more accurate in complex tasks such as mathematical calculations, code generation and scientific analysis. Meanwhile, the situation of the model's "hallucination" (generating false information) has decreased by 38%, and the output reliability has been significantly enhanced. In addition, V3.1 has also optimized multilingual support, especially improving the processing ability of Asian languages and less common languages.
Although V3.1 brings important improvements, the release date of the next-generation model DeepSeek-R2, which users are most looking forward to, remains uncertain. There was market speculation that R2 would be released between August 15th and 30th, but insiders close to DeepSeek said the rumor is untrue and that there is currently no specific release plan.
DeepSeek's update rhythm indicates that the V4 model may be launched before the release of R2. However, the official has always been low-key, emphasizing that "it will be released when it's done" and has not responded to any market speculation.
Recently, news of the release of DeepSeek's next-generation model DeepSeek-R2 has attracted widespread market attention. A rumor claimed DeepSeek-R2 would be released between August 15th and 30th. However, according to Tencent Technology, sources close to DeepSeek have confirmed to the media that this rumor is untrue and that DeepSeek-R2 has no release planned this month.
As early as the beginning of this year, news about the R2 model had already started to spread. At that time, it was predicted that the R2 model would be released on March 17th, but this claim was also denied by the official. So far, DeepSeek has not officially announced the specific release time and technical details of the R2 model, which has disappointed many observers.
According to reports, the DeepSeek team stepped up the development of the R2 model in June this year. Insiders revealed that CEO Liang Wenfeng is still not satisfied with the capabilities of the model, and the team is still improving its performance and is not ready for official use. Early news said that DeepSeek originally planned to launch the R2 model in May, but due to various reasons, the plan was delayed. The new model is expected to be able to generate higher quality code and have the ability to reason in non-English languages.
On August 7, 2025, OpenAI officially released the GPT-5 series of models, which represents the most significant product upgrade in the company's history. This release includes four versions: GPT-5, GPT-5 Mini, GPT-5 Nano, and GPT-5 Pro, each deeply optimized for different application scenarios, marking a new stage of development for AI technology.
Unified Intelligent System: A Revolutionary Breakthrough in Technical Architecture

GPT-5 is positioned by OpenAI as a "unified intelligent system", successfully integrating capabilities that were previously scattered across different models: the multimodal processing of GPT-4o, the deep reasoning of the o series, advanced mathematical calculation, and agent task execution. This architectural innovation eliminates the need for users to manually switch between different models. The system automatically selects the most suitable processing method based on task complexity through a real-time router.
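OpenAI hasn't published the router's internals. Purely as an illustrative toy of the idea (the heuristic and the path names are invented, not OpenAI's), a complexity-based dispatcher might look like this:

```python
def route(prompt: str) -> str:
    """Toy stand-in for GPT-5's real-time router: send prompts that look like
    multi-step reasoning to a 'deep' path, everything else to a fast path.
    The keyword heuristic and path names are invented for illustration."""
    reasoning_markers = ("prove", "step by step", "debug", "derive")
    if len(prompt) > 500 or any(m in prompt.lower() for m in reasoning_markers):
        return "deep-reasoning path"
    return "fast path"

assert route("What's the capital of France?") == "fast path"
assert route("Prove that sqrt(2) is irrational, step by step.") == "deep-reasoning path"
```

The real router presumably uses learned signals rather than keywords, but the interface is the same: one entry point, with the routing decision hidden from the user.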
In terms of core technical indicators, GPT-5 has achieved a comprehensive breakthrough:
- Mathematical reasoning: 94.6% accuracy on the AIME 2025 benchmark without external tools.
- Code capability: 74.9% on SWE-bench Verified and 88% on the Aider Polyglot multilingual programming test.
- Multimodal understanding: 84.2% on the MMMU benchmark.
- Professional knowledge: 88.4% on the GPQA question-answering test.

Detailed Analysis of the Four Versions
The future roadmap is also well worth looking forward to 😆. Xiaomi is already working to further improve the model's computational efficiency, aiming for offline deployment on end devices. That means users could enjoy high-quality audio AI without relying on cloud services, with better privacy protection and lower cost, and it would also provide technical support for audio AI applications across Xiaomi's IoT ecosystem. In addition, Xiaomi is refining a sound-editing feature driven by natural-language prompts, so complex audio-processing tasks could be done with a simple text description, greatly lowering the barrier to audio editing 🤩
Xiaomi's decision to fully open-source MiDashengLM-7B is truly meaningful 👏. It can drive technical progress across the whole audio AI field and give researchers and developers a great opportunity to learn and improve. Open-sourcing can accelerate the adoption of audio AI technology, enable more innovative applications, and help the industry ecosystem flourish 🎉
Friends, it feels like a new era of audio AI is coming. What do you think of MiDashengLM-7B 🧐? Come chat in the comments 😜
#Xiaomi #MiDashengLM7B #AudioAI #OpenSourceModel #MultimodalLLM #AudioUnderstanding #TechBreakthrough #InferenceEfficiency