Pelayo Arbués



Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes

Apr 16, 2025 · 1 min read

  • articles
  • literature-note


Metadata

  • Author: readwise.io
  • Full Title: Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes
  • URL: https://readwise.io/reader/document_raw_content/51090821

Highlights

  • Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes


Created with Quartz, © 2025
