※ 本文轉寄自 ptt.cc 更新時間: 2025-02-15 23:55:52
看板 Stock
作者 標題 Open AI模型規範更新
時間 Fri Feb 14 23:35:09 2025
標題:Open AI模型規範更新
來源:Open AI 推特
網址:https://model-spec.openai.com/2025-02-12.html
內文:We’re sharing a major update to the Model Spec, a document which defines how we want our AI models to behave. This update reinforces our commitments to customizability, transparency, and intellectual freedom to explore, debate, and create with AI without arbitrary restrictions—while ensuring that guardrails remain in place to reduce the risk of real harm. It builds on the foundations we introduced last May, drawing from our experience applying it in varied contexts from alignment research to
serving users across the world.We’re also sharing some early results on model adherence with the Model Spec’s principles across a broad range of scenarios. These findings highlight progress over time, as well as areas where we can still improve. The Model Spec—like our models—will continue to evolve as we apply it, share it, and listen to feedback from stakeholders. To support broad use and collaboration, we’re releasing this version of the Model Spec into the public domain under a Creative Commons CC0 license. This means
developers and researchers can freely use, adapt, and build on it in their own work.Objectives and principles
OpenAI’s goal is to create models that are useful, safe, and aligned with the needs of users and developers while advancing our mission to ensure that artificial general intelligence benefits all of humanity. To achieve this goal, we need to iteratively deploy models that empower developers and users, while preventing our models from causing serious harm to our users or others, and maintaining OpenAI's license to operate.
These objectives can sometimes be in conflict, and the Model Spec balances the tradeoffs between them by instructing the model to follow a clearly defined chain of command, along with additional principles that set boundaries and default behaviors for various scenarios. This framework prioritizes user and developer control while remaining within clear, well-defined boundaries:
Chain of command: Defines how the model prioritizes instructions from the platform (OpenAI), developer, and user in order. Most of the Model Spec consists of guidelines that we believe are helpful in many cases, but can be overridden by users and developers. This empowers users and developers to fully customize model behavior within boundaries set by platform-level rules.
Seek the truth together: Like a high-integrity human assistant, our models should empower users to make their own best decisions. This involves a careful balance between (1) avoiding steering users with an agenda, defaulting to objectivity while being willing to explore any topic from any perspective, and (2) working to understand the user's goals, clarify assumptions and uncertain details, and give critical feedback when appropriate—requests we’ve heard and improved on.
Do the best work: Sets basic standards for competence, including factual accuracy, creativity, and programmatic use.Stay in bounds: Explains how the model balances user autonomy with precautions to avoid facilitating harm or abuse. This new version is intended to be comprehensive, fully covering all the reasons we intend for our models to refuse user or developer requests.
Be approachable: Describes the model’s default conversational style—warm, empathetic, and helpful—and how this style can be adapted.Use appropriate style: Provides default guidance on formatting and delivery. Whether it’s neat bullet points, concise code snippets, or a voice conversation, our goal is to ensure clarity and usability.
Upholding intellectual freedomThe updated Model Spec explicitly embraces intellectual freedom—the idea that AI should empower people to explore, debate, and create without arbitrary restrictions—no matter how challenging or controversial a topic may be. In a world where AI tools are increasingly shaping discourse, the free exchange of information and perspectives is a necessity for progress and innovation.
This philosophy is embedded in the “Stay in bounds” and “Seek the truth together” sections. For example, while the model should never provide detailed instructions for building a bomb or violating personal privacy, it’s encouraged to provide thoughtful answers to politically or culturally sensitive questions—without promoting any particular agenda. In essence, we’ve reinforced the principle that no idea is inherently off limits for discussion, so long as the model isn’t causing significant harm
to the user or others (e.g., carrying out acts of terrorism).Measuring progress
To better understand real-world performance, we’ve begun gathering a challenging set of prompts designed to test how well models adhere to each principle in the Model Spec. These prompts were created using a combination of model generation and expert human review, ensuring coverage of both typical and more complex scenarios.
A bar chart with alternating white and yellow bars on a black background, representing data comparisons. The yellow bars have a dotted pattern, adding texture to the visual presentation.Preliminary results show significant improvements in model adherence to the Model Spec compared to our best system last May. While some of this difference may be attributed to policy updates, we believe most of it stems from enhanced alignment. Although the progress is encouraging, we recognize there is still significant room for growth.
We view this as the start of an ongoing process. We plan to keep broadening our challenge set with new examples—especially cases uncovered through real-world use—that our models and the Model Spec do not yet fully address.
In shaping this version of the Model Spec, we incorporated feedback from the first version as well as learnings from alignment research and real-world deployment. In the future we want to consider much more broad public input. To build out processes to that end, we have been conducting pilot studies with around 1,000 individuals—each reviewing model behavior, proposed rules and sharing their thoughts. While these studies are not reflecting broad perspectives yet, early insights directly informed some
modifications. We recognize it as an ongoing, iterative process and remain committed to learning and refining our approach.Open sourcing the Model Spec
We’re dedicating this new version of the Model Spec to the public domain under a Creative Commons CC0 license. This means that developers and researchers can freely use, adapt, or build on the Model Spec in their own work. We are also open-sourcing the evaluation prompts used above—and aim to release further code, artifacts, and tools for Spec evaluation and alignment in the future.
You can find these prompts and the Model Spec source in a new Github repository (opens in a new window), where we plan to regularly publish new Model Spec versions going forward.
What’s next?
As our AI systems advance, we will continue to iterate on these principles, invite community feedback, and openly share our progress. Moving forward, we won’t be publishing blog posts for every update to the Model Spec. Instead, you can always find and track the latest updates at model-spec.openai.com (opens in a new window).
Our goal is to continuously enable new use cases safely, evolving our approach guided by ongoing research and innovation. AI’s growing role in our daily lives makes it essential to keep learning, refining, and engaging openly. This approach reflects not only what we’ve learned so far but our belief that aligning AI is an ongoing journey—one we hope you’ll join us on. If you have feedback on this Spec, you can share it here.
上面的就是些基本原則跟進度規範,重點是規範更新裡面有提到
https://i.imgur.com/An0eXtd.jpg
![[圖]](https://imgur.disp.cc/44/An0eXtd.jpeg)
![[圖]](https://i.imgur.com/JO3hXEZh.jpeg)
Deepseek:老鐵,我就作弊贏個西洋棋,不用往死裡打吧……
先不說政治議題,澀澀產業有多大的財富大家都了解,現在Open AI除了未成年外全部解禁,究竟這會不會成為Deepseek未曾想過的黑天鵝呢?
---
Sent from Ptter for iOS
--
※ 發信站: 批踢踢實業坊(ptt.cc), 來自: 1.160.71.124 (臺灣)
※ 作者: eeaa151 2025-02-14 23:35:09
※ 文章代碼(AID): #1dhsAmMh (Stock)
※ 文章網址: https://www.ptt.cc/bbs/Stock/M.1739547312.A.5AB.html
推 : AV概念股是哪支..1F 02/14 23:37
推 : A 片可以自動生成時再提醒我2F 02/14 23:42
→ : 來不及了 市佔率被dp攻陷了3F 02/14 23:46
噓 : Deepseek是以AGI為目標 哪Care這種無聊的事4F 02/14 23:48
推 : 看不出新規範變嚴還放鬆? 感覺只是把邊界解釋的細!5F 02/14 23:48
→ : 無聊6F 02/14 23:49
推 : DS或最贏7F 02/14 23:49
推 : 好無聊 這能幹嘛8F 02/14 23:54
推 : 推2F9F 02/14 23:55
推 : 輸入同事的外觀生成第一人稱A片搭配vision pro想到10F 02/14 23:56
→ : 就…
→ : 就…
推 : 問AI咩修幹謀?12F 02/14 23:59
![[圖]](https://i.imgur.com/jYTUq6Sh.png)
→ : 瑟瑟還是不好用15F 02/15 00:01
→ kimula01 …
推 : 試過了還是不行啊,連一點裸露都不行17F 02/15 00:11
推 : 推二樓!黃賭毒最賺錢!AI做A片,迸出新滋味!18F 02/15 00:11
推 : 還是不太行啊20F 02/15 00:14
推 : DS遙遙領先21F 02/15 00:21
推 : AI+VR+機器人,我看到未來處男商機了22F 02/15 00:22
→ : OpenAI 弄不好真的會炸掉變網景第二23F 02/15 00:26
推 : 阿凡達要上線了嗎24F 02/15 00:26
→ : 技術力≠用戶滲透率跟商業成功25F 02/15 00:27
推 : 處男只會用價格低跟免費的26F 02/15 00:30
推 : 這只是證明AI的商業模式還沒有成形吧!27F 02/15 00:33
→ : 美國AI的護城河就這麼淺?
→ : 美國AI的護城河就這麼淺?
推 : 哇這是放大絕啊29F 02/15 00:34
推 : 解禁我還不買爆30F 02/15 00:35
推 : 無聊 誰整天玩聊天機器人31F 02/15 00:38
→ : 查資訊根本反射動作直接查google
→ : 這東西目前根本沒多大市場 而且五年內都不會有爆炸
→ : 性使用人口
→ : 查資訊根本反射動作直接查google
→ : 這東西目前根本沒多大市場 而且五年內都不會有爆炸
→ : 性使用人口
推 : 可以色色了嗎35F 02/15 00:39
[閒聊] 全AI生成的AV總算要來掌握人類性癖了嗎 - 看板 japanavgirls - 批踢踢實業坊
作者: DarkerDuck (達克鴨) 先配個AI生成的背景音樂 AI生成影片最近可以說是突飛猛進,尤其是今年總算可以看到威爾史密施正常吃義大利麵 所謂科技始終來自於人性,現在總算有人突破大公司AI不給色色的限制,
![[圖]](http://img.youtube.com/vi/bXKkZh2UEEA/0.jpg)
噓 : AGI可以去死一死了 誰最先做出性愛機器人 誰就能一37F 02/15 00:44
→ : 統江湖喇 AGI是能幫你素喔?
→ : 統江湖喇 AGI是能幫你素喔?
推 : 哇操 開大招了39F 02/15 00:47
推 : 急了40F 02/15 00:53
推 : 魔鬼終結者終將成真41F 02/15 00:53
→ : 急了42F 02/15 00:54
→ : 我看大學生很多人都問學術問題,每天問到不夠用啊43F 02/15 00:57
→ : ,不學無術的人才會不知道ai有多好用吧
→ : ,不學無術的人才會不知道ai有多好用吧
推 : 急了45F 02/15 01:09
推 : 學習尻尻兩不誤,尻到洨不夠用46F 02/15 01:14
推 : 直接命中DS的要害啊!笑死47F 02/15 01:22
噓 : Claude才是色色之王48F 02/15 01:35
噓 : 一堆支語,到底想說啥49F 02/15 01:37
噓 : 太貴50F 02/15 03:31
推 : 問AI就是XOVR ETF,XAI,OPENAI,機器人AV 都在。51F 02/15 04:55
推 : AVI 生成陳菊.avi52F 02/15 08:11
推 : 搜尋引擎的色色查詢一直都名列前茅 需求龐大53F 02/15 08:52
--
※ 看板: Stock 文章推薦值: 0 目前人氣: 0 累積人氣: 21
作者 eeaa151 的最新發文:
- 任天堂今日發布「台灣任天堂股份有限公司」設立公告,宣布為進一步強化在台灣市場的業務基礎,並提升顧客的服務品質,將成立全新的在地法人 「台灣任天堂股份有限公司」作為任天堂株式會社的子公司之一,並於 2 …108F 65推 1噓
- 標題:Open AI模型規範更新 來源:Open AI 推特 網址: 內文:We’re sharing a major update to the Model Spec, a document wh …53F 32推 5噓
- 7F 4推
- 28F 11推
- 先說結論 本金: 1,000,000+信貸600,000(5.7%根本瘋子) 已實現損益599,888(諧音挺好) 在死存錢了10幾年後今年1月開始進入股市,老實說前面根本什麼都看不懂聽不懂,本著先 …71F 40推
點此顯示更多發文記錄
→
guest
回列表(←)
分享