Dear friends, on October 29 OpenAI updated ChatGPT's usage policy: the model is now explicitly prohibited from providing professional medical, legal, or financial advice!
The change is mainly intended to avoid regulatory risk, reduce the danger of misleading users, and redraw the boundaries of AI in high-stakes fields. ChatGPT can no longer interpret medical images, assist with diagnosis, draft or interpret legal contracts, or provide personalized investment strategies or tax planning. When users make such requests, the system will consistently redirect them to human experts. The policy applies across all ChatGPT models and the API to ensure consistent enforcement.

Professionals can still use it for general concept discussion or organizing information, but it cannot be used to deliver "fiduciary" advice directly to end users. The adjustment is driven by global regulation: the EU's Artificial Intelligence Act, soon to take effect, subjects high-risk AI to strict review, and the US FDA requires clinical validation for diagnostic AI tools. By drawing this line, OpenAI avoids being classified as "software as a medical device" and heads off potential lawsuits.
Reactions to the new rule fall into two camps. Some individual users are disappointed at losing a "low-cost consultation" channel, since relying on AI had saved them significant professional fees. Most of the medical and legal communities, however, support the change, because AI's "pseudo-professional" output can indeed lead to misdiagnosis or legal disputes. Data shows that over 40% of ChatGPT queries are advice-seeking, with medical and financial advice accounting for nearly 30%, so the policy may cause a short-term decline in traffic.
The impact on the industry is also significant. Google, Anthropic, and others may follow with similar restrictions, and vertical AI tools, such as certified legal or medical models, could gain popularity. Chinese companies like Baidu have already moved to comply; under tighter domestic regulation, innovation has to be explored within regulatory "sandbox" mechanisms.
OpenAI emphasizes that the goal is to "balance innovation and safety." The update continues the Model Spec framework, with further iterations reportedly planned for February 2025. The shift of AI from "omnipotent assistant" to "limited assistant" seems to be becoming an industry consensus, and technological breakthroughs will have to develop alongside ethical constraints. What new balance will the GPT-5 era bring?
What do you think of ChatGPT's new rule? Share your thoughts in the comments!
Topic tags and keywords: #OpenAI #ChatGPT #UsagePolicyUpdate #MedicalAdvice #LegalAdvice #FinancialAdvice #AISupervision #IndustryImpact

