ChatGPT and generative A.I. are already changing the way we book trips and travel
ChatGPT Comes to Minecraft with AI Mobs
Bard can now help code and create functions for Google Sheets
Generative AI is coming for white-collar roles… who will you be when it takes your job?
Curing disease is the hot new field for AI talent
Google Research’s Brain and DeepMind merging
Google to deploy generative AI to create sophisticated ad campaigns
Some Neural Networks Learn Language Like Humans
First-ever AI fashion week debuts in NYC: ‘A new realm of creation’
HuggingFace releases a dataset of millions of Wikipedia article embeddings
Jaron Lanier calls for the end of AI black boxes and proposes ‘data dignity’ to compensate human creators
Hyena Hierarchy: Towards Larger Convolutional Language Models
Recent advances in deep learning have relied heavily on the use of large Transformers due to their ability to learn at scale. However, the core building block of Transformers, the attention operator, exhibits quadratic cost in sequence length, limiting the amount of context accessible. Existing subquadratic methods based on low-rank and sparse approximations need to […]
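The quadratic cost the abstract refers to comes from the attention score matrix, which has one entry per (query, key) pair. A minimal NumPy sketch (illustrative only, not the Hyena paper's code) makes the scaling concrete:

```python
import numpy as np

def naive_attention(Q, K, V):
    # scores has shape (n, n): one entry per (query, key) pair.
    # This n x n intermediate is the quadratic cost in sequence length.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    # numerically stable softmax over the key axis
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

n, d = 512, 64  # sequence length, head dimension (arbitrary example sizes)
Q = K = V = np.random.randn(n, d)
out = naive_attention(Q, K, V)
# Doubling n quadruples the size of the (n, n) score matrix,
# which is what limits how much context attention can afford.
```

Subquadratic approaches like Hyena replace this all-pairs score matrix with operators (e.g. long convolutions) whose cost grows closer to linearly in n.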
Stability AI ups the game…releases new open-source language model, StableLM
New Large Language and Vision Assistant (LLaVA) released and paired with Vicuna
Meta releases DINOv2 vision models with self-supervised learning
Scaling Transformer to 1M tokens and beyond with RMT
This technical report presents the application of a recurrent memory to extend the context length of BERT, one of the most effective Transformer-based models in natural language processing. By leveraging the Recurrent Memory Transformer architecture, we have successfully increased the model’s effective context length to an unprecedented two million tokens, while maintaining high memory retrieval […]
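The core idea behind the recurrent-memory approach is to process a long sequence as a series of fixed-size segments, carrying a small set of memory vectors from one segment to the next so that cost grows linearly with length. A toy sketch of that control flow (the segment processor below is a stand-in for a Transformer forward pass, not the actual RMT code):

```python
import numpy as np

def process_segment(segment, memory):
    # Stand-in for a Transformer step: mix the segment's mean into the
    # memory so that state persists across segment boundaries.
    return 0.5 * memory + 0.5 * segment.mean(axis=0, keepdims=True)

def recurrent_memory_pass(tokens, segment_len=512, d=64):
    # memory: a fixed, small state carried between segments,
    # so total cost is linear in sequence length.
    memory = np.zeros((1, d))
    for start in range(0, len(tokens), segment_len):
        memory = process_segment(tokens[start:start + segment_len], memory)
    return memory

tokens = np.random.randn(4096, 64)  # a "long" sequence of token embeddings
final_memory = recurrent_memory_pass(tokens)
```

Because each segment is a fixed size, the quadratic attention cost applies only within a segment, which is how the reported context can stretch to millions of tokens.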
Is Drake faking his own AI song?
Google employees label AI chatbot Bard ‘worse than useless’ and ‘a pathological liar’
Align your Latents: High-Resolution Video Synthesis with Latent Diffusion Models
Latent Diffusion Models (LDMs) enable high-quality image synthesis while avoiding excessive compute demands by training a diffusion model in a compressed lower-dimensional latent space. Here, we apply the LDM paradigm to high-resolution video generation, a particularly resource-intensive task. We first pre-train an LDM on images only; then, we turn the image generator into a video […]
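The compute saving comes from running the diffusion process on a compressed latent tensor rather than raw pixels. A back-of-the-envelope sketch (the shapes below are illustrative assumptions, not the paper's exact configuration):

```python
import numpy as np

# Pixel-space video frame vs. its compressed latent representation.
image = np.random.rand(3, 512, 512)   # RGB frame in pixel space
latent = np.random.rand(4, 64, 64)    # assumed 8x spatial compression, 4 channels

pixel_elems = image.size    # values the denoiser would touch per step in pixel space
latent_elems = latent.size  # values it touches per step in latent space
ratio = pixel_elems // latent_elems
print(ratio)  # → 48: each diffusion step processes ~48x fewer values
```

Since every frame of every denoising step pays this per-tensor cost, the saving compounds heavily for video, which is why the paper pre-trains the LDM on images and then extends the generator temporally.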