The AI Report - #2


Hello and welcome to the second episode of the AI Report. We aim to keep you informed about developments in the world of AI, from research to products to everyday life, and to shed light on the latest trends. Please subscribe for actionable insights and share this newsletter on social media.

📝 [Paper] Unlimiformer: Long-Range Transformers with Unlimited Length Input

Unlimiformer is a novel approach that wraps existing transformer models to handle unlimited input lengths by offloading attention computation to a k-nearest-neighbor index, which can be stored in GPU or CPU memory. It enables attention heads in decoder layers to retrieve the top-k keys rather than attending to every key, allowing for extremely long input sequences. The model improves over BART and Longformer on long-document and multi-document summarization tasks, even with inputs up to 350k tokens long, without any input truncation or additional learned weights.

This is quite welcome news for those of us who want very long context windows.
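The core idea can be illustrated with a minimal sketch (not the authors' code): instead of a softmax over all encoded keys, each decoder query retrieves only its top-k nearest keys and attends to those. The dimensions and the brute-force scan below are illustrative; a real system would use a k-nearest-neighbor index such as Faiss.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 16            # head dimension (assumed for illustration)
n_keys = 10_000   # keys from a very long encoded input
k = 32            # only this many keys enter the softmax

keys = rng.normal(size=(n_keys, d)).astype(np.float32)    # the "index"
values = rng.normal(size=(n_keys, d)).astype(np.float32)
query = rng.normal(size=(d,)).astype(np.float32)          # one decoder query

# Retrieve the top-k keys by dot-product score (full scan stands in
# for a k-nearest-neighbor lookup here).
scores = keys @ query
topk = np.argpartition(scores, -k)[-k:]

# Softmax attention restricted to the retrieved keys.
s = scores[topk]
w = np.exp(s - s.max())
w /= w.sum()
output = w @ values[topk]

print(output.shape)  # (16,)
```

The attention output only ever touches k entries, so the cost per query is independent of the input length, which is what lets the wrapped model scale to hundreds of thousands of tokens.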

Read the paper here

📝 [Paper & Video] Real-Time Neural Appearance Models

Nvidia introduced a system for real-time rendering of complex scenes that uses learned hierarchical textures interpreted by neural decoders, which generate reflectance values and importance-sampled directions. The system supports anisotropic sampling and level-of-detail rendering, baking deeply layered material graphs into a compact neural representation. They demonstrate that with hardware-accelerated tensor operations, neural decoders can be executed efficiently in real-time path tracing, significantly improving performance and enabling film-quality visuals for real-time applications like games and live previews.
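To make the "latent texture plus neural decoder" idea concrete, here is a hypothetical toy sketch: a per-texel feature vector is fetched from a baked latent texture and run through a tiny MLP to produce reflectance parameters. All sizes, the weight initialization, and the choice of four output parameters are illustrative assumptions, not Nvidia's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 8   # features stored per texel (assumed size)
HIDDEN = 32      # decoder hidden width (assumed)
OUT = 4          # e.g. albedo RGB + roughness (illustrative)

# "Baked" latent texture: H x W x LATENT_DIM, replacing a layered material graph.
latent_texture = rng.normal(size=(64, 64, LATENT_DIM)).astype(np.float32)

# Tiny decoder MLP weights (these would be trained in practice).
W1 = rng.normal(scale=0.1, size=(LATENT_DIM, HIDDEN)).astype(np.float32)
b1 = np.zeros(HIDDEN, dtype=np.float32)
W2 = rng.normal(scale=0.1, size=(HIDDEN, OUT)).astype(np.float32)
b2 = np.zeros(OUT, dtype=np.float32)

def decode(u, v):
    """Fetch the latent feature at texel (u, v) and decode reflectance params."""
    z = latent_texture[v, u]
    h = np.maximum(z @ W1 + b1, 0.0)             # ReLU
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid -> params in [0, 1]

params = decode(10, 20)
print(params.shape)  # (4,)
```

Because the decoder is just a few small matrix multiplies per shading point, it maps well onto hardware tensor cores, which is the key to running it inside a real-time path tracer.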


Read more here

See the impressive video demo here.

📝 [Paper] Self-Alignment

Title: Principle-Driven Self-Alignment of Language Models from Scratch with Minimal Human Supervision

This is an exciting new paper that builds on LLaMA but, more crucially, neither distills data from ChatGPT (as Alpaca did) nor relies heavily on human feedback.

Figure: the Self-Align pipeline.

Read the paper here

Repository

[News] Google I/O - all about AI

Highlights:

  1. Google expands Bard to 180 countries

  2. PaLM 2 was announced; it comes in four sizes (Gecko, Otter, Bison and Unicorn) and powers 25 Google products.

  3. Google rethinks Search: it is trying to change the format from searching to conversing, with AI summarizing results from a variety of sources.

[Repo] PrivateGPT

Ask questions of your documents without an internet connection, using the power of LLMs. It is 100% private: you can ingest documents and ask questions entirely offline, and no data leaves your execution environment at any point.

Read about the repo here

Thanks for reading! Subscribe for free to receive new posts on the latest AI research, tools and news.
