Optimization Is Geometry, Geometry Is Inference: Ending the Transformer Black-Box Era with Mathematics
<section><img alt="圖片" src="http://mmbiz.qpic.cn/mmbiz_gif/Psho9dm7oDHKVtfYDubjKdZRUjAfBQQicXjoZWJ3qnK42ooD4eeJUfJBM4SSZVa2RE5lO0j6rWwzliby0j9u4bDg/640?wx_fmt=gif&wxfrom=5&wx_lazy=1&tp=webp#imgIndex=0"></section><section></section><section><figure><h2><section><section><section><section><section><section><section><section><p><span><span>不是設計,而是進化。當交叉熵遇見 SGD,</span><span>貝氏推論成了唯一的數學必然。</span></span></p></section></section></section></section></section></section></section></section></h2></figure></section><section><span><img><img><img><img><img><img><img><img><img><img><img></span></section><p><span><span>長期以來,LLM 的推論能力被視為一種難以解釋的「湧現」。我們目睹了 Loss 的下降,卻難以透視參數空間內部發生了什麼。</span></span></p><p></p><p><span><span><span>近日,來自</span></span><b><span><span>哥倫比亞大學和 </span></span></b><b><span><span>Dream Sports</span></span></b><span><span> 的研究團隊發布了一組</span><span>三部曲</span><span>論文</span></span></span><span><span><span>。</span></span></span></p><p></p><p><span><span><span>這項工作並未止步於實驗觀察,而是建立了一個連接</span></span><b><span><span>最佳化目標</span><span> (Loss)</span></span></b><span><span>、</span></span><b><span><span>內部幾何</span><span><span> (</span><span>Geometry<i></i></span><span>)</span></span></span></b><span><span> 與</span></span><b><span><span>推論功能</span><span> (Inference)</span></span></b><span><span> 的完整物理圖景。</span></span></span></p><p></p><p><span><span><span>它講述了一個</span></span><span><span>關於 LLM 如何運作的完整故事</span></span><span><span>。其核心野心正如標題所言——</span><span>試圖用數學終結 Transformer 的黑盒時代。</span></span></span></p><p></p><p><span><span>他們證明了</span></span><span><span>:</span></span><b><span><span>Attention 機制並非某種近似的特徵提取器,而是在梯度下降的驅動下,自發演化出的一套精確的貝氏推論機。</span></span></b></p><section></section><section></section><section><span><img src="http://mmbiz.qpic.cn/mmbiz_png/Psho9dm7oDGhKg9nnSz5qQrwKvXibt3wulOVRfC18yCkd6xXqGq22h6QUk8chptF0fnQ4uXeZtAktYMrWwG2SyQ/640?wx_fmt=png&tp=webp&wxfrom=5&wx_lazy=1#imgIndex=2" alt="圖片"><img><img><img><img><img><img><img><img><img><img><img><img><img></span></section><p><span><span><span>理論錨點:交叉熵的貝氏終局</span></span></span></p><p><span><span><span>Transformer 的訓練通常基於最小化交叉熵損失。Paper I 首先澄清了這一最佳化過程的數學終局。</span></span></span></p><p></p><section><img src="http://mmbiz.qpic.cn/mmbiz_png/VBcD02jFhgmvQDasddbj0Phadjb1JPicOFtleo4dibRXLz4vCbXqMEicPIG077hHxFzI24Rn8Uczjiaz1t7J2Bhnug/640?wx_fmt=png&from=appmsg&tp=webp&wxfrom=5&wx_lazy=1#imgIndex=26" alt="圖片"></section><section></section><p><span><span><span>論文標題:</span></span></span></p><p><span><span><span>The Bayesian Geometry of Transformer Attention</span></span></span></p><p><span><span><span>論文連結:</span></span></span></p><p><span><span><span>https://arxiv.org/abs/2512.22471</span></span></span></p><p></p><section><p><span><span>在無限數據與容量的極限下,最小化交叉熵 </span></span><span><span>:</span></span></p><p></p><section><img src="https://mmbiz.qpic.cn/mmbiz_png/VBcD02jFhgmvQDasddbj0Phadjb1JPicOJicSggfbpMLbPngMUuIYf54PFzg1vBjsIw3Xvfq1pCh8HBBbr2gajcQ/640?wx_fmt=png&from=appmsg#imgIndex=27" alt="圖片"></section><section></section><section></section><section><p><span><span>其最優解 </span></span><span><span> 在數學上嚴格等價於</span><span>解析貝氏後驗預測分佈</span><span> (Bayesian Posterior Predictive Distribution):</span></span></p><p></p><section><img src="https://mmbiz.qpic.cn/mmbiz_png/VBcD02jFhgmvQDasddbj0Phadjb1JPicOtuxia0y9ibLDEEupUbeqenWZ3qh85eX6eaLD9pIVykGr5I4QtfnBdOgw/640?wx_fmt=png&from=appmsg#imgIndex=28" alt="圖片"></section><section></section><p><span><span>為了驗證有限容量的 Transformer 是否真正逼近了這一極限,作者構建了</span><span>貝氏風洞 (Bayesian Wind Tunnels) 
Experimental results show that, in bijection learning and HMM state-tracking tasks, the Transformer reaches remarkably high precision.

〓 Figure 2. The Transformer's predictive entropy tightly matches the theoretical Bayesian posterior, with a mean absolute error (MAE) as low as $10^{-3}$ bits; by contrast, an MLP cannot effectively use the context to eliminate hypotheses.

More microscopic evidence comes from single-sequence analysis, the decisive test that the model truly understands rather than merely memorizing averages:

〓 Figure 3. For each individual sequence, the Transformer's entropy (solid line) precisely tracks the sawtooth variation of the theoretical posterior (dashed line), showing that the model performs token-by-token inference in real time.

In the HMM task, the model even exhibits perfect length generalization, evidence that it has learned a general recursive algorithm:

〓 Figure 4. The model fits perfectly within the training length K=20. At test lengths K=30 and K=50, the error grows smoothly with no cliff-like collapse, showing the model is not rote-memorizing.
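The recursive algorithm implied by this length generalization is the forward (filtering) pass of the HMM, which is itself only a few lines. A minimal sketch, with a toy two-state transition and emission matrix standing in for the paper's setup: the same recursion produces the exact posterior-predictive entropy at every step, for K=20 and K=50 sequences alike.

```python
import numpy as np

# Exact HMM filtering: the recursion a length-generalizing model must
# implement. The transition T, emission E, and sizes are toy assumptions.
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])          # T[i, j] = p(z_{t+1}=j | z_t=i)
E = np.array([[0.8, 0.2],
              [0.3, 0.7]])          # E[i, k] = p(x_t=k | z_t=i)
prior = np.array([0.5, 0.5])

def filter_entropies(xs):
    """Posterior-predictive entropy (bits) of x_{t+1} after each prefix."""
    belief = prior.copy()
    entropies = []
    for x in xs:
        belief = belief * E[:, x]          # condition on current observation
        belief = belief / belief.sum()
        belief = belief @ T                # propagate one step forward
        pred = belief @ E                  # predictive distribution over x_{t+1}
        entropies.append(-(pred * np.log2(pred)).sum())
    return entropies

# The recursion is length-agnostic: the same code yields the exact target
# for training-length and extrapolation-length sequences alike.
rng = np.random.default_rng(0)
xs = rng.integers(0, 2, size=50)
print(np.round(filter_entropies(xs)[:10], 3))
```

Because the filter carries only a fixed-size belief state forward, its cost per token is constant in sequence length; a model that has internalized this recursion, rather than a lookup table of training sequences, extrapolates smoothly.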
Geometric Representation: The Three-Stage Evolution of Inference

Probing experiments further reveal how the Transformer implements this inference internally. The authors describe it as a three-stage geometric mechanism.

1. Hypothesis frame construction (Layer 0)

Inference begins with the construction of a coordinate system. The Key vectors of layer 0 form a near-orthogonal basis, mapping every possible hypothesis into an independent geometric subspace.

〓 Figure 5. Cosine-similarity matrix of the layer-0 Key vectors. Off-diagonal entries are close to 0, indicating that the model has built an orthogonal frame for the hypothesis space.
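The probe behind Figure 5 is simple to reproduce in outline. A sketch under stated assumptions: `keys` is a placeholder for the layer-0 Key vectors collected per hypothesis (how to extract them depends on the model; random vectors stand in here), and the statistic is the off-diagonal mass of their cosine-similarity matrix.

```python
import numpy as np

# Probe sketch for the Layer-0 orthogonality claim. `keys` is a placeholder
# (n_hypotheses x d_head); in a real probe it would hold the Key vectors the
# model assigns to each hypothesis-identifying token at layer 0.
rng = np.random.default_rng(0)
keys = rng.standard_normal((24, 64))   # stand-in for extracted Key vectors

unit = keys / np.linalg.norm(keys, axis=1, keepdims=True)
cos = unit @ unit.T                    # cosine-similarity matrix (cf. Fig. 5)

off_diag = cos[~np.eye(len(cos), dtype=bool)]
print(f"mean |off-diagonal cosine| = {np.abs(off_diag).mean():.3f}")
# Near-zero off-diagonal mass indicates a near-orthogonal hypothesis basis.
```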
2. Progressive hypothesis elimination (middle layers)

As depth increases, the routing function of attention emerges. The alignment between Queries and Keys shows a pronounced sharpening trend.

Mathematically, this process is equivalent to the multiplicative likelihood step of a Bayesian update, suppressing layer by layer the hypotheses inconsistent with the current evidence.

〓 Figure 6. From the diffuse attention of Layer 0 (left) to the highly focused attention of Layer 5 (right), showing the progressive elimination of wrong hypotheses.

3. Entropy-ordered manifold (late layers)

Once the routing structure stabilizes, the Value vectors do not collapse into discrete points in representation space; they unfold into a smooth one-dimensional (1D) manifold.

The coordinate parameterizing this manifold corresponds precisely to the posterior entropy.

〓 Figure 7. Late in training, the PCA projection of the Value vectors forms a smooth curve, with low-entropy (high-confidence) and high-entropy states geometrically ordered along it.

Dynamical Origins: How Gradient Descent Induces the Geometry

Why does plain gradient descent spontaneously produce these geometric structures?

Through a full first-order derivation of the gradient dynamics, Paper II finds that the cross-entropy loss induces a subtle positive-feedback mechanism.

Paper title: Gradient Dynamics of Attention: How Cross-Entropy Sculpts Bayesian Manifolds
Paper link: https://arxiv.org/abs/2512.22473

1. The advantage routing law (E-step)

Write $a_{ij} = \mathrm{softmax}_j(s_{ij})$ for the attention weights over scores $s_{ij}$, $v_j$ for the Value vectors, $o_i = \sum_j a_{ij} v_j$ for the context, and $\delta_i = \partial\mathcal{L}/\partial o_i$ for the upstream error. The gradient of the attention score obeys

$$\frac{\partial\mathcal{L}}{\partial s_{ij}} \;=\; a_{ij}\,\bigl(c_{ij} - \bar c_i\bigr), \qquad c_{ij} = \delta_i^{\top} v_j,\quad \bar c_i = \sum_k a_{ik}\, c_{ik},$$

and defining the Advantage $A_{ij} = \bar c_i - c_{ij}$ gives the descent update $\Delta s_{ij} \propto a_{ij}\, A_{ij}$.

Physical meaning: $\delta_i$ is the direction of the error gradient. When $v_j$ points against the error direction (that is, when $c_{ij}$ is more negative and hence helps reduce the loss), the Advantage is positive.

Conclusion: gradient descent increases the attention weight of exactly those positions that effectively reduce the loss.

2. The responsibility-weighted update law (M-step)

The Value vectors update as

$$\Delta v_j \;=\; -\,\eta \sum_i a_{ij}\, \delta_i.$$

Physical meaning: each Value vector is pulled toward the responsibility-weighted average of the upstream error signals $\delta_i$ of all the Queries attending to it, gradually evolving into the "prototype" of that cluster of Queries.

〓 Figure 8. Geometric interpretation of the dynamics: the Value $v_j$ moves against the error signal $\delta_i$, improving the context $o_i$, which in turn makes the compatibility $c_{ij}$ more negative, closing a co-evolution loop between routing and content.

This dynamical process is structurally equivalent to an implicit EM (Expectation-Maximization) algorithm: the attention weights act as the soft responsibilities of the E-step, while the Value vectors act as the prototypes of the M-step.

This also explains the frame-precision dissociation phenomenon: the attention structure typically stabilizes early in training, while the Value content keeps being refined on the manifold for the rest of training.
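Both laws can be checked mechanically against automatic differentiation. A minimal sketch in PyTorch, using a single attention row and a quadratic stand-in loss (both illustrative choices, not the paper's setup): the analytic expressions $a_{ij}(c_{ij}-\bar c_i)$ and $a_{ij}\,\delta_i$ are compared against the gradients autograd returns.

```python
import torch

torch.manual_seed(0)
n, d = 5, 8
s = torch.randn(n, requires_grad=True)      # attention scores for one query
v = torch.randn(n, d, requires_grad=True)   # value vectors
target = torch.randn(d)

a = torch.softmax(s, dim=0)                  # attention weights a_j
o = a @ v                                    # context vector o = sum_j a_j v_j
loss = 0.5 * ((o - target) ** 2).sum()       # quadratic stand-in for the loss
loss.backward()

delta = (o - target).detach()                # upstream error, dL/do
c = v.detach() @ delta                       # compatibilities c_j = delta . v_j
c_bar = (a.detach() * c).sum()               # attention-weighted mean

# E-step law: dL/ds_j = a_j (c_j - c_bar)
print(torch.allclose(s.grad, a.detach() * (c - c_bar), atol=1e-6))
# M-step law: dL/dv_j = a_j * delta (responsibility-weighted error signal)
print(torch.allclose(v.grad, a.detach()[:, None] * delta, atol=1e-6))
```

Both checks print `True`: whatever differentiable loss sits downstream, the softmax bottleneck forces score gradients into the advantage form and value gradients into the responsibility-weighted form, which is the sense in which the EM structure is implicit in cross-entropy training.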
Mapping to Reality: From Superposition to Chain-of-Thought

Although the conclusions above are drawn in controlled environments, the authors report in the blog post [3] that similar geometric signatures are observed in production-grade models such as Pythia, Llama, and Mistral.

The key is superposition: in mixed tasks, the manifold structure is often buried under high-dimensional noise; but under domain restriction (for example, looking only at math tasks), the high-dimensional representation collapses into a clean entropy-ordered manifold.

〓 Figure 9. Conceptual illustration of the similar manifold structures that emerge inside Pythia, Llama, and Mistral on domain-restricted tasks.
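The domain-restriction probe is equally simple in outline. A hedged sketch on synthetic data (in a real probe, `math_states` would be late-layer Value or residual vectors from one domain and `entropy` the analytic posterior entropy per token; both are simulated here): PCA over a mixed pool finds the dominant unrelated direction and misses the manifold, while restricting to one domain before projecting recovers an entropy-ordered curve.

```python
import numpy as np

# Domain-restriction probe, sketched on synthetic data. One "domain" carries
# an entropy-ordered 1D manifold; the other is dominated by its own unrelated
# direction (decorrelated here for clarity) that swamps the mixed-pool PCA.
rng = np.random.default_rng(0)
d, n = 64, 400

entropy = rng.uniform(0, 2, n)               # per-token posterior entropy (bits)
axis = rng.standard_normal(d)                # direction of the entropy manifold
other_dir = rng.standard_normal(d)
other_dir -= (other_dir @ axis) / (axis @ axis) * axis   # decorrelate domains

math_states = np.outer(entropy, axis) + 0.5 * rng.standard_normal((n, d))
other_states = np.outer(10 * rng.standard_normal(n), other_dir) \
               + rng.standard_normal((n, d))

def top_pc(x):
    """First principal direction of a (samples x dims) matrix."""
    x = x - x.mean(0)
    return np.linalg.svd(x, full_matrices=False)[2][0]

proj_mixed = math_states @ top_pc(np.vstack([math_states, other_states]))
proj_restricted = math_states @ top_pc(math_states)

corr = lambda a, b: abs(np.corrcoef(a, b)[0, 1])
print(f"|corr(PC1, entropy)| mixed pool:        {corr(proj_mixed, entropy):.2f}")
print(f"|corr(PC1, entropy)| domain-restricted: {corr(proj_restricted, entropy):.2f}")
```

On the mixed pool the leading component tracks the unrelated domain and the correlation with entropy is near zero; after restriction it is near one, mirroring how the entropy-ordered manifold surfaces only once the superposed domains are separated.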
This finding also gives a clean geometric explanation of Chain-of-Thought (CoT).

On complex reasoning tasks, a Transformer risks running out of layers: it cannot complete all the necessary hypothesis elimination within a fixed number of computation steps.

CoT essentially acts as a geometric extender.

By generating intermediate reasoning steps, the model effectively buys itself more rounds of computation, letting it make a series of short, robust state transitions along the high-confidence entropy-ordered manifold, instead of the long jumps through low-confidence regions that trigger hallucinations.

Conclusion

This research offers a unified perspective on the nature of Transformer intelligence: optimization gives rise to geometry, and geometry gives rise to inference.

The parameter matrices are not random statistical approximations; they are Bayesian inference machines "sculpted" by gradient flow on the cross-entropy potential surface.

Seen through this geometric dynamics, the attention mechanism is the physical carrier of that inference process.

References

[1] Naman Aggarwal, Siddhartha R. Dalal, Vishal Misra. The Bayesian Geometry of Transformer Attention. arXiv preprint arXiv:2512.22471 (2025).
[2] Naman Aggarwal, Siddhartha R. Dalal, Vishal Misra. Gradient Dynamics of Attention: How Cross-Entropy Sculpts Bayesian Manifolds. arXiv preprint arXiv:2512.22473 (2025).
[3] Vishal Misra. Attention Is Bayesian Inference. Medium (Dec 2025). https://medium.com/@vishalmisra/attention-is-bayesian-inference-578c25db4501