Category Archives: Deep Learning

We need constraints in Generative AI Models Now
Enough of unconstrained generative AI outputs: they often lead to wrong or harmful results. How can LLMs be made safe? It is time for constraints in models. How can constraints help? No unintended outputs; by constraining models, we steer them toward the right outputs; and the regulation of generative AI can be done by governments. …
Unconstrained LLMs, How Long? Safe, secure, and trusted constraint-based LLMs
Note: This is a duplicate copy of the original file with DOI 10.13140/RG.2.2.15511.02722. The PDF can be accessed at "(PDF) Unconstrained LLMs, How Long? Safe, secure, and trusted constraint-based LLMs". Abstract: Most LLMs are unconstrained optimization problems (UMP). The problems of hallucination, and of dangerous images and text produced by LLMs, persist. How can this problem be solved? Why…
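The abstract frames LLM generation as an unconstrained optimization problem. One concrete way a constraint can enter is at sampling time, by masking disallowed tokens before selection. The sketch below is purely illustrative, not the paper's method: the vocabulary, scores, and blocklist are all made up.

```python
# Illustrative sketch: turning unconstrained next-token selection into a
# constrained one by masking a blocklist of disallowed tokens.
# (Hypothetical toy vocabulary and scores, not a real LLM.)

def constrained_argmax(scores, vocab, blocked):
    """Pick the highest-scoring token whose text is not in `blocked`."""
    best_tok, best_score = None, float("-inf")
    for tok, s in zip(vocab, scores):
        if tok in blocked:
            continue  # constraint: this token may never be emitted
        if s > best_score:
            best_tok, best_score = tok, s
    return best_tok

vocab = ["safe", "harmful", "neutral"]
scores = [1.2, 3.5, 0.7]  # the unconstrained model prefers "harmful"
print(constrained_argmax(scores, vocab, blocked={"harmful"}))  # → safe
```

Under an unconstrained argmax the model would emit "harmful" (score 3.5); the constraint removes that option before selection rather than hoping the model avoids it.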
Memory Learning, Model Titans (Behrouz et al., 2024)
"Titans: Learning to memorize at test time. arXiv preprint arXiv:2501.00663." This paper (Behrouz et al., 2024) [1] aims to build a memory-learning model that is efficient, uses less memory, and learns and predicts more. Neural learning is the basis of memory-based learning. Forgetting is important too, yet too much forgetting can lose essential information about the input. …
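The interplay of memorizing and forgetting mentioned above can be sketched as a toy recurrence: a retention factor slowly decays the old memory while new inputs are blended in. This is an illustrative simplification, not the Titans update rule, and all constants are invented.

```python
# Toy memory-with-forgetting sketch in the spirit of Titans
# (Behrouz et al., 2024); illustrative only, not the paper's formulation.

def update_memory(memory, x, alpha=0.1, beta=0.9):
    """Blend new input into memory; beta < 1 slowly forgets old content.

    memory, x: equal-length lists of floats.
    alpha: write strength for new information.
    beta: retention factor; a smaller beta forgets faster.
    """
    return [beta * m + alpha * xi for m, xi in zip(memory, x)]

mem = [0.0, 0.0]
for step in ([1.0, 0.0], [1.0, 0.0], [0.0, 1.0]):
    mem = update_memory(mem, step)
print(mem)
```

With beta close to 1 the memory retains early inputs for a long time; pushing beta toward 0 makes it forget them almost immediately, which is the "too much forgetting" failure mode the excerpt warns about.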
Nested Learning: Part II
#ai #llm #deeplearning #ml Here is the video lecture (subscribe for updates): https://www.youtube.com/watch?v=qDwAm1f0JBY Some observations: it captures the forgetting behavior of LLMs; context, once gone, can be looked up in nested memories and hence revised if the nested parameters and memories are referenced. The pre-training cannot be edited, but the computation of temporary…
Nested Learning, Part I: Associative Memory and Momentum
Here we discuss one of the latest papers of 2025 in AI from Google Research, given in reference [1]. The paper is titled "Nested learning: The illusion of deep learning architectures" and is published in Neural Information Processing Systems. The contributors are Behrouz, A., Razaviyayn, M., Zhong, P., & Mirrokni, V. The authors suggest that…
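One way to read the "memory and momentum" pairing in the title: a momentum buffer is an exponentially decaying memory of every gradient the optimizer has seen. A minimal sketch of that view, with invented numbers and not the authors' formulation:

```python
# Optimizer momentum viewed as a tiny memory of past gradients:
# the buffer is an exponentially decaying sum of everything seen so far.
# (Illustrative sketch, not the Nested Learning paper's construction.)

def momentum_step(buffer, grad, mu=0.9):
    """Fold the new gradient into the running memory `buffer`."""
    return mu * buffer + grad

buf = 0.0
for g in (1.0, 1.0, 1.0):
    buf = momentum_step(buf, g)
print(buf)  # ≈ 2.71
```

After three unit gradients the buffer holds 1 + 0.9 + 0.81: every past gradient is still present, just discounted, which is exactly the associative-memory reading of momentum.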
Can Animals learn from AI? Do we need AI to do it for our pets?
Imagine your pets at home while you are away at the office. What if you could see them? There are three kinds of learning: supervised, unsupervised, and semi-supervised. AI could teach in any of these modes, if pets are receptive to it. What if an intelligent Pet AI app on your TV could guide your…
Predicting the Insulin Dose Intelligently for Perusal by Humans: The Missing Part, Upload a Meal Image for Calories!
#ai #health Note: This method incorporates some fuzziness, but it is still not foolproof and needs to be validated; therefore it is not fully reliable. Always seek an opinion on your insulin dose before taking it, or stay within a safe dosage range. Some of the past works in this area use: There…
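For context on what such a predictor would compute, standard carb counting estimates a mealtime dose from carbohydrate grams plus a correction toward a glucose target. The sketch below only illustrates that arithmetic: every parameter (carb ratio, sensitivity factor, target) is a hypothetical placeholder, and the note above about validation applies here too.

```python
# Toy carb-counting arithmetic: dose = carbs / carb-ratio
#                                      + (glucose - target) / sensitivity.
# All constants are hypothetical placeholders, not medical guidance.

def bolus_estimate(carbs_g, glucose, target=110, icr=10.0, isf=50.0):
    """Illustrative mealtime dose estimate.

    carbs_g: grams of carbohydrate (e.g. inferred from a meal image).
    icr: insulin-to-carb ratio (grams covered per unit) - placeholder.
    isf: insulin sensitivity factor (mg/dL lowered per unit) - placeholder.
    """
    meal_dose = carbs_g / icr
    correction = max(0.0, (glucose - target) / isf)  # no dose below target
    return meal_dose + correction

print(bolus_estimate(carbs_g=60, glucose=160))  # 60/10 + 50/50 = 7.0
```

The meal-image step in the post's title would feed the `carbs_g` input; the rest of the formula is where the "safe dosage range" check would have to be enforced.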
RAG Engineering-Based Query-Specific Summarization
Here I present the idea behind query-specific summarization with RAG: how it can be implemented, a use case, and how to test it. Retrieval-Augmented Generation is widely used in organizations with their own data. Many organizations have data that they do not want to upload to generic servers where a generic…
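The retrieve-then-summarize shape of such a system can be sketched without any model at all: below, retrieval is plain word overlap and the "summary" simply stitches the retrieved passages together, standing in for an LLM call. The documents and query are invented for illustration.

```python
# Minimal retrieve-then-summarize sketch. Real RAG systems use embedding
# search and an LLM; here word overlap and string stitching stand in for
# both, so only the pipeline shape is shown. All data is made up.

def retrieve(query, docs, k=2):
    """Rank documents by word overlap with the query; return the top k."""
    q = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def query_specific_summary(query, docs):
    """Stand-in for the LLM step: join the retrieved passages."""
    return " ".join(retrieve(query, docs))

docs = [
    "Our refund policy allows returns within 30 days.",
    "The cafeteria menu changes weekly.",
    "Refund requests require the original receipt.",
]
print(query_specific_summary("What is the refund policy?", docs))
```

Testing it is exactly what the excerpt promises to cover: assert that passages relevant to the query appear in the output and irrelevant ones do not.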
OpenAI to acquire model training firm Neptune
In the past months it has also acquired Statsig, Software Applications Incorporated, and Jony Ive's AI devices startup. What does Neptune do, and what is special about it? OpenAI already uses Neptune. With a record of 30,000 projects tracked, Neptune is trusted by 60,000 researchers and 1,500 commercial teams. It started in 2017 with Deepsense.ai and came out…
Harmonic Proves a Tough Mathematics Problem. Should we name AI as a contributor or a Co-author in research works?
#ai #mathematics Paul Erdős proposed numerous open problems. Erdős Problem #124 is listed as OPEN on http://www.erdosproblems.com with the note: "This is open, and cannot be resolved with a finite computation." Aristotle, by Harmonic, verified with Lean, proved the above Erdős problem, known as problem number 124. Harmonic…
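The proof itself is machine-checked in Lean. For readers unfamiliar with what that means, here is a deliberately trivial Lean 4 theorem in the same verified style; it is not any part of the Erdős #124 proof, only an illustration of machine-checked statements.

```lean
-- A minimal machine-checked statement: Lean accepts this file only if
-- the proof term actually establishes the claim.
theorem two_add_two : 2 + 2 = 4 := rfl
```

The checker, not a human referee, guarantees the proof is valid, which is one argument in the contributor-versus-co-author debate the title raises.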