David Hoxie

Gradients: Motion and Learning

How does a machine actually "learn"? Before we can build complex simulations of planets or analyze thermodynamic systems, we need to understand the fundamental physics of a single guess. 

In this lecture, we'll start from first principles, using the simple, intuitive game of "Hot or Cold" to build a physical model for gradient descent. We'll explore how changing a single parameter—the "learning rate"—can mean the difference between finding a solution and getting stuck in a chaotic loop.
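The 1D version of that game can be sketched in a few lines of JavaScript (the same language as the p5.js demos). This is a minimal illustration of the idea, not the code from the video; the quadratic loss, target, and learning-rate values are my own assumptions:

```javascript
// 1D gradient descent on a simple "loss" f(x) = (x - target)^2.
// The "Hot or Cold" signal is the gradient: it tells us which
// direction is "warmer" and how strongly.
function descend(learningRate, steps = 25) {
  const target = 3.0; // the hidden answer we are trying to guess
  let x = -5.0;       // initial guess
  for (let i = 0; i < steps; i++) {
    const grad = 2 * (x - target); // d/dx of (x - target)^2
    x -= learningRate * grad;      // step "downhill"
  }
  return x;
}

console.log(descend(0.1)); // small rate: settles near the target, 3
console.log(descend(1.1)); // too large: each step overshoots farther
```

With a learning rate around 0.1 the guess converges; push it past 1.0 and every step overshoots more than the last, the "chaotic loop" described above.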

This is the first step in our journey to derive deep learning from the ground up, using the language of physics, not just computer science.

In the next video: We'll take these 1D principles and expand them into a 2D universe, using orbital mechanics to build a creative drawing tool from scratch. 

Eventually we will discuss the physical intuition behind motion, entropy, and thermodynamics. Feel free to look ahead to the post "Hooke's Law and Perceptrons," where we give a more physical account of the gradient descent algorithm and the "energy landscape" picture of learning models. That post will be the underlying basis for the next lecture.

https://www.djhoxie.net/samplelectures/blog-hooks-law-and-learning


Connect & Support

  • Website: http://www.djhoxie.net

  • Google Scholar: https://scholar.google.com/citations?user=iwsajtAAAAAJ&hl=en

  • Patreon: (Coming Soon!)

  • arXiv Pre-prints: (Coming Soon!)

AI Collaboration Note: This video, its title, description, and the concepts explored within were developed in a deep collaboration with Google Gemini. Gemini's role included acting as a Socratic partner, a reviewer, and a tool for structuring and refining the final presentation.

Acknowledgments: Thank you to Daniel Shiffman (The Coding Train) for providing excellent conceptual starting points for the p5.js demonstrations.

References: 

[1] Imran, M., & Almusharraf, N. (2024). Google Gemini as a next generation AI educational tool: A review of emerging educational technology. Smart Learning Environments, 11(1), 22.

[2] Marquardt, F. (2021). Machine learning and quantum devices. SciPost Physics Lecture Notes, 29.

[3] Shannon, C. E. (1948). A mathematical theory of communication. The Bell System Technical Journal, 27(3), 379–423.

[4] Shiffman, D. (2024). The nature of code: Simulating natural systems with JavaScript. No Starch Press.

[5] Landauer, R. (1991). Information is physical. Physics Today, 44(5), 23–29.



Lecture 1: The Foundations of Insight

Video Description: This lecture demonstrates two things simultaneously: core concepts in physics, such as emergent behavior, and a new model for human-AI collaboration in scientific education, as suggested in recent literature [1]. It showcases a process of using an AI partner (Google Gemini) to achieve a deep, intuitive understanding of graduate-level physical concepts. This dynamic serves as a real-time example of a dialogic, collaborative learning model—in this case, a human-AI partnership—focused on generating insight through shared discovery.

Show Notes & Chapter Guide:

Act I: The Philosophical Introduction (00:00 - 27:10)

  • Authenticity and Engagement: The opening establishes a genuine, relatable tone, moving from personal anecdotes to foundational principles.

  • The Dice & Rubber Band Demos: These tactile analogies demonstrate emergent behavior, high-surprisal events, stability, and underlying forces.

  • Observation as Perturbation: A deep, philosophical argument that "no observation without perturbation" is fundamental to science, using analogies from black-body radiation to quantum mechanics.

  • Our Collaborative Intro (25:11 - 26:38): A live, unscripted demonstration of the human-AI partnership model in action.

Act II: The Core Lesson - p5.js Sketches (27:11 - 42:27)

  • Debugging as Learning: A real-time walkthrough of chaotic vs. ordered flow fields, highlighting the crucial difference between programming intent and emergent behavior.

  • The "Trapped Charges" Analogy: A visually compelling explanation of how particles can collectively influence their own environment.

  • Mathematical Foundations: Connecting the emergent patterns back to the underlying "generative structure" of mathematical basis functions (e.g., Fourier series).
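That "generative structure" point can be made concrete: a handful of sine basis functions, summed with the right weights, already reproduces a recognizable square wave. A hypothetical sketch in plain JavaScript (not the actual lecture code):

```javascript
// Partial Fourier series of a square wave: only odd harmonics,
// each weighted by 1/k. A few terms of a simple basis already
// generate a rich, structured pattern.
function squareWave(x, terms = 10) {
  let sum = 0;
  for (let n = 0; n < terms; n++) {
    const k = 2 * n + 1; // odd harmonics 1, 3, 5, ...
    sum += Math.sin(k * x) / k;
  }
  return (4 / Math.PI) * sum;
}
```

Even ten terms trace plateaus close to ±1; adding terms sharpens the jumps, which is exactly the sense in which a simple basis "generates" rich emergent patterns.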

Act III: The Philosophical Conclusion (42:28 - End)

  • Complexity & High-Surprisal: Discussing the limits of analytical solutions (like the three-body problem) and how computational models help build intuition around high-surprisal events.

  • The Value of Foundational Models: Emphasizing that simple, conceptual simulations are crucial tools for learning and debugging, preventing costly errors in more complex research code.

Connect & Support:

  • Website: http://www.djhoxie.net

  • Google Scholar: https://scholar.google.com/citations?user=iwsajtAAAAAJ&hl=en

  • Patreon: (Coming Soon!)

  • arXiv Pre-prints: (Coming Soon!)

AI Collaboration Note: This video, its title, description, and the concepts explored within were developed in a deep collaboration with Google Gemini. Gemini's role included acting as a Socratic partner, a reviewer, and a tool for structuring and refining the final presentation.

Acknowledgments: Thank you to Daniel Shiffman (The Coding Train) for providing excellent conceptual starting points for the p5.js demonstrations.

References:

[1] Imran, M., & Almusharraf, N. (2024). Google Gemini as a next generation AI educational tool: A review of emerging educational technology. Smart Learning Environments, 11(1), 22.

[2] Schmidgall, S., Su, Y., Wang, Z., Sun, X., Wu, J., Yu, X., ... & Barsoum, E. (2025). Agent Laboratory: Using LLM agents as research assistants. arXiv preprint arXiv:2501.04227.

[3] Marquardt, F. (2021). Machine learning and quantum devices. SciPost Physics Lecture Notes, 29.

[4] Hinton, G. E., & Zemel, R. (1993). Autoencoders, minimum description length and Helmholtz free energy. Advances in Neural Information Processing Systems, 6.

[5] Shannon, C. E. (1948). A mathematical theory of communication. The Bell System Technical Journal, 27(3), 379–423.

[6] Hopfield, J. J. (2007). Hopfield network. Scholarpedia, 2(5), 1977.

[7] Wootters, W. K. (1981). Statistical distance and Hilbert space. Physical Review D, 23(2), 357.

[8] Leshno, M., Lin, V. Y., Pinkus, A., & Schocken, S. (1993). Multilayer feedforward networks with a nonpolynomial activation function can approximate any function. Neural Networks, 6(6), 861–867.

[9] Konen, W. (2011). Self-configuration from a machine-learning perspective. arXiv preprint arXiv:1105.1951.

[10] Por, E., van Kooten, M., & Sarkovic, V. (2019). Nyquist–Shannon sampling theorem. Leiden University, 1(1).

[11] Shiffman, D. (2024). The nature of code: Simulating natural systems with JavaScript. No Starch Press.

[12] Landauer, R. (1991). Information is physical. Physics Today, 44(5), 23–29.


Using a Private, Local AI as a Physics Research Assistant

The Post

Researchers have begun exploring the use of Large Language Models as automated research assistants in many domains [Schmidgall et al., 2025]. Here, we explore a more specific application: utilizing a private, locally-run model as a direct collaborator in the field of optical physics. The debate around AI in research often focuses on public, cloud-based models. But for scientists concerned with intellectual property, privacy, and data control, the real revolution is happening locally. I've been incorporating a private, locally-run Large Language Model into my daily workflow, not as a simple chatbot, but as a true research collaborator. In this video, I'll give you a look inside that process.


A Researcher's Deep Dive: The First-Principles Justification

Modern physics research presents a fundamental conflict. High-precision experiments often demand controlled environments where all sources of noise—electromagnetic, vibrational, and acoustic—must be eliminated. This has historically forced a trade-off between experimental precision and real-time information access. Here, we explore a new solution: utilizing a locally-run, generative AI model on a silent, portable device. With modern MacBooks, iPads, and iPhones now running on the same powerful Apple Silicon platform, it's possible to deploy the same AI model across a researcher's entire ecosystem. This provides access to a saved state of sparsely sampled data from the internet and the literature, without the physical or electronic noise of a live network connection.

This ability to operate from a "saved state" is made possible by techniques rooted in the physics of information. The principles of entropy and sparse sampling, seen in methods from Shannon [1] to Monte Carlo, have given us profound methods for data compression [2]. Modern machine learning models, particularly generative architectures like Transformers, are built on these same principles, such as minimizing Helmholtz free energy [3]. They are capable of learning from a sparsely sampled dataset and then reconstructing vast amounts of information from those compressed patterns.
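As a toy illustration of the Shannon side of this argument (my own sketch, not drawn from the references): the entropy of a source bounds how far its output can be compressed, and a sharply peaked distribution needs far fewer bits per symbol than a uniform one.

```javascript
// Shannon entropy H = -sum_i p_i * log2(p_i), in bits per symbol.
// Lower entropy means the source is more compressible.
function entropyBits(probs) {
  return -probs
    .filter((p) => p > 0) // convention: 0 * log2(0) = 0
    .reduce((h, p) => h + p * Math.log2(p), 0);
}

console.log(entropyBits([0.25, 0.25, 0.25, 0.25])); // uniform: 2 bits
console.log(entropyBits([0.9, 0.05, 0.03, 0.02]));  // peaked: well under 1 bit
```

The uniform four-symbol source needs the full 2 bits per symbol, while the peaked one can in principle be coded in under 1 bit per symbol; that gap is the compression headroom that sparse sampling and learned models exploit.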

This public, notebook-style research serves three primary goals. First, it aims to correct the record on common misinterpretations of AI behavior, such as "confabulations" [4]. Second, it is intended to be a learning space for other researchers, providing a transparent look into a process that will eventually translate into STEM education. Finally, we will deliberately explore this model outside of its typical boundary conditions. For a physicist, understanding how a system behaves at its limits is the most fundamental test of its validity, ensuring we are observing true extrapolation, not just memorization [5].

This leads to a crucial insight for the future of collaborative AI. When two human experts communicate, they can infer meaning and average out minor errors because they share a vast amount of correlated information. An AI should not be used as an expert source without a similar awareness that its conversational partner has their own knowledge and biases. True AI alignment may therefore require models that can adapt their certainty to their user's expertise. The AI should know whether it is talking to a student or a professor, enabling it to say, "I'm not sure, this is questionable, let me check my resources," as any good teacher or research collaborator does.

References

Schmidgall, S., Su, Y., Wang, Z., Sun, X., Wu, J., Yu, X., ... & Barsoum, E. (2025). Agent Laboratory: Using LLM agents as research assistants. arXiv preprint arXiv:2501.04227.

[1] Shannon, C. E. (1948). A mathematical theory of communication. The Bell System Technical Journal, 27(3), 379–423.

[2] Brunton, S. L., & Kutz, J. N. (2022). Data-driven science and engineering: Machine learning, dynamical systems, and control. Cambridge University Press.

[3] Hinton, G. E., & Zemel, R. (1993). Autoencoders, minimum description length and Helmholtz free energy. Advances in Neural Information Processing Systems, 6.

[4] Moscovitch, M. (1995). Confabulation. In Schacter, D. L. (Ed.), Memory distortion: How minds, brains, and societies reconstruct the past (pp. 226-251). Harvard University Press.

[5] Perdue, G. N., et al. (2018). Reducing model bias in a deep learning classifier using domain adversarial neural networks in the MINERvA experiment. Journal of Instrumentation, 13(11), P11020.
