The overall idea: ① have DeepSeek generate a JSON-format outline from the given material, ② convert it directly into a PPT with the python-pptx library, ③ manually tweak the slide styles and formatting.
If you already have the material, a deck of about 10 slides takes roughly five minutes; it works well for business study sessions, group meeting reports, and similar scenarios.
In theory, tuning the prompt and the code should give better results, but I haven't bothered to try; take this as a rough starting point. I also tried a locally deployed 32B distilled model and it works as well, though in that case it's best to split step ① into two passes: first read the material, then convert the summary into JSON (a sketch of this is below).
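For reference, here is a minimal sketch of that two-pass variant. It assumes the local model is exposed through an OpenAI-compatible endpoint (for example, Ollama's at http://localhost:11434/v1); the model tag, file name, and prompt wording are just placeholders, not the exact ones I used.
```
from openai import OpenAI

# Assumption: a local OpenAI-compatible server (e.g. Ollama) is running;
# adjust base_url and the model tag to match your own setup.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
MODEL = "deepseek-r1:32b"  # placeholder model tag

with open("material.txt", encoding="utf-8") as f:
    material = f.read()

def ask(prompt):
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Pass 1: read the material and summarize it as slide-by-slide points
summary = ask("Summarize the key points of the following material, "
              "grouped into slides with a title and content each:\n" + material)

# Pass 2: convert the summary into the JSON outline format used below
outline_json = ask("Convert the following outline into JSON, as a list of "
                   "{\"title\": ..., \"content\": ...} objects, using '\\n' "
                   "for line breaks:\n" + summary)
```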
Prompt:
Please summarize the key points of the attached file and produce a PPT outline in JSON format, to be used for {intended purpose}. The following is a JSON template for your reference; note that line breaks should be written as '\\n' rather than '\n':
```
[
    {"title": "Slide 1", "content": "This is the first line of first slide. \\n This is the second line of first slide"},
    {"title": "Slide 2", "content": "This is the second slide."}
]
```
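One practical note on gluing step ① to step ②: the model's reply may come wrapped in a Markdown code fence (```json ... ```), which json.loads won't accept as-is. A small helper like the hypothetical strip_json_fence below is one way to clean that up first; adapt it to whatever your model actually returns.
```
import json
import re

def strip_json_fence(reply: str) -> str:
    """Remove a surrounding ```json ... ``` fence, if present (hypothetical helper)."""
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", reply, re.DOTALL)
    return match.group(1) if match else reply.strip()

# Example: 'reply' stands in for whatever text the model returned for the prompt above
reply = '```json\n[{"title": "Slide 1", "content": "First point"}]\n```'
slides = json.loads(strip_json_fence(reply))
print(slides[0]["title"])  # -> Slide 1
```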
Python code:
```
from pptx import Presentation
import json

# Sample JSON data (an outline generated from "Attention Is All You Need")
json_data = '''
[
    {"title": "Slide 1: Title & Authors", "content": "Attention Is All You Need (Vaswani et al., 2017)\\nAshish Vaswani, Noam Shazeer, Niki Parmar, et al.\\nGoogle Brain & University of Toronto"},
    {"title": "Slide 2: Core Innovation", "content": "Proposed Transformer Architecture\\n• Replaces RNN/CNN with self-attention mechanisms\\n• Enables parallelization and faster training\\n• Achieves SOTA in machine translation tasks"},
    {"title": "Slide 3: Model Architecture", "content": "Encoder-Decoder Structure:\\n- Encoder: 6 layers with multi-head self-attention + FFN\\n- Decoder: 6 layers with masked self-attention + cross-attention\\nKey Components:\\n• Scaled dot-product attention\\n• Positional encoding (sinusoidal functions)\\n• Residual connections + LayerNorm"},
    {"title": "Slide 4: Attention Mechanisms", "content": "Scaled Dot-Product Attention:\\n• Input: Query, Key, Value matrices\\n• Formula: softmax(QKᵀ/√dₖ)V\\nMulti-Head Attention:\\n• Parallel attention heads (h=8)\\n• Captures diverse dependency patterns"},
    {"title": "Slide 5: Experimental Results", "content": "Machine Translation (WMT 2014):\\n• EN-DE: 28.4 BLEU (2.0 improvement)\\n• EN-FR: 41.8 BLEU (3.5 days training)\\nKey Advantages:\\n• 12x faster training vs. RNN/CNN models\\n• Generalizes to parsing tasks (91.3 F1 on WSJ)"},
    {"title": "Slide 6: Conclusion & Impact", "content": "Transformer's Strengths:\\n• Eliminates sequential computation\\n• Superior performance and scalability\\nFuture Directions:\\n• Extend to multimodal tasks (image/audio)\\n• Explore sparse attention for long sequences"}
]
'''

# Load the JSON outline
slides = json.loads(json_data)

# Create a presentation object
prs = Presentation()

# Add one slide per outline entry
for slide_data in slides:
    slide_layout = prs.slide_layouts[1]  # layout 1: title and content
    slide = prs.slides.add_slide(slide_layout)
    title = slide.shapes.title
    content = slide.placeholders[1]

    title.text = slide_data["title"]
    content.text = slide_data["content"]

# Save the presentation
prs.save('Python-PPTX.pptx')
```
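Some of the manual styling in step ③ can also be scripted with python-pptx. A minimal sketch, assuming the deck produced above and the default template's placeholders; the Pt sizes here are just example values, not any fixed recommendation.
```
from pptx import Presentation
from pptx.util import Pt

# Assumption: post-processing the deck generated above; adjust sizes to taste.
prs = Presentation('Python-PPTX.pptx')

for slide in prs.slides:
    for shape in slide.shapes:
        if not shape.has_text_frame:
            continue
        # Placeholder idx 0 is the title placeholder in the default layouts
        is_title = shape.is_placeholder and shape.placeholder_format.idx == 0
        for paragraph in shape.text_frame.paragraphs:
            for run in paragraph.runs:
                run.font.bold = is_title
                run.font.size = Pt(32) if is_title else Pt(18)

prs.save('Python-PPTX-styled.pptx')
```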