Learnings from COBOL modernization in the real world

There’s a lot of excitement right now about AI enabling mainframe application modernization. Boards are paying attention. CIOs are getting asked for a plan. AI is a genuine accelerator for COBOL modernization, but to get results, AI needs additional context that source code alone can’t provide. Here’s what we’ve learned working with 400+ enterprise customers: mainframe …

PayPal’s historically large data migration is the foundation for its gen AI innovation

With the dawn of the gen AI era, businesses are facing unprecedented opportunities for transformative products, demanding a strategic shift in their technology infrastructure. A few years ago, PayPal, a digital-native company serving hundreds of millions of customers, faced a significant challenge. After 25 years of success in expanding services and capabilities, we’d created complexity …

Adaptive drafter model uses downtime to double LLM training speed

Reasoning large language models (LLMs) are designed to solve complex problems by breaking them down into a series of smaller steps. These powerful models are particularly good at challenging tasks like advanced programming and multistep planning. But developing reasoning models demands an enormous amount of computation and energy due to inefficiencies in the training process. …

CLIP is back on Anima, because CLIP is eternal.

You thought you could get away from it? Never. Guys at Yandex and Adobe implemented CLIP for a bunch of models that don’t use it – https://github.com/quickjkee/modulation-guidance I made it into a ComfyUI node for Anima – https://github.com/Anzhc/Anima-Mod-Guidance-ComfyUI-Node For the images above and below I used CLIP L from here – https://huggingface.co/Anzhc/Noobai11-CLIP-L-and-BigG-Anime-Text-Encoders Basic CLIP L also works, …

Constructive Circuit Amplification: Improving Math Reasoning in LLMs via Targeted Sub-Network Updates

Prior studies investigating the internal workings of LLMs have uncovered sparse subnetworks, often referred to as circuits, that are responsible for performing specific tasks. Additionally, it has been shown that model performance improvement through fine-tuning often results from the strengthening of existing circuits in the model. Taken together, these findings suggest the possibility of intervening …

Efficiently serve dozens of fine-tuned models with vLLM on Amazon SageMaker AI and Amazon Bedrock

Organizations and individuals running multiple custom AI models, especially recent Mixture of Experts (MoE) model families, can face the challenge of paying for idle GPU capacity when the individual models don’t receive enough traffic to saturate a dedicated compute endpoint. To solve this problem, we have partnered with the vLLM community and developed an efficient …

A developer’s guide to production-ready AI agents

Something has shifted in the developer community over the past year. AI agents have moved from “interesting research concept” to “thing my team is actually building.” The prototypes are working. The demos are impressive. And now comes the harder question: How do we ship this? That question turns out to be a multi-part one. Agents …