Rendering the Camera with Metal on iOS (AVFoundation + MetalKit)
Rendering camera video with Metal without AVCaptureVideoPreviewLayer. In this tutorial we render the camera's video feed directly to the screen usin...
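The tutorial text above is truncated, but the approach it names — drawing camera frames with Metal instead of AVCaptureVideoPreviewLayer — typically follows a CVPixelBuffer → CVMetalTextureCache → MTLTexture pipeline feeding an MTKView. A minimal sketch, assuming a BGRA capture format and a separate render pass that samples `latestTexture`; the class and property names are illustrative, not taken from the tutorial:

```swift
import AVFoundation
import MetalKit

// Minimal sketch: capture BGRA frames from the camera and expose each frame
// as an MTLTexture (zero-copy via CVMetalTextureCache) for an MTKView to draw.
// Error handling, session configuration, and the actual draw pass are omitted.
final class CameraRenderer: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let device = MTLCreateSystemDefaultDevice()!
    private let session = AVCaptureSession()
    private var textureCache: CVMetalTextureCache?
    weak var mtkView: MTKView?                 // view that renders latestTexture
    private(set) var latestTexture: MTLTexture?

    func start() {
        CVMetalTextureCacheCreate(nil, nil, device, nil, &textureCache)
        guard let camera = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: camera) else { return }
        let output = AVCaptureVideoDataOutput()
        output.videoSettings =
            [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "camera.frames"))
        session.addInput(input)
        session.addOutput(output)
        session.startRunning()
    }

    // Called once per captured frame: wrap the CVPixelBuffer in a Metal texture
    // without copying pixel data, then trigger a redraw on the main thread.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let cache = textureCache,
              let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let w = CVPixelBufferGetWidth(pixelBuffer)
        let h = CVPixelBufferGetHeight(pixelBuffer)
        var cvTexture: CVMetalTexture?
        CVMetalTextureCacheCreateTextureFromImage(nil, cache, pixelBuffer,
                                                  nil, .bgra8Unorm, w, h, 0, &cvTexture)
        if let cvTexture { latestTexture = CVMetalTextureGetTexture(cvTexture) }
        DispatchQueue.main.async { self.mtkView?.draw() }
    }
}
```

The key design point the tutorial's title implies: by skipping AVCaptureVideoPreviewLayer, every frame passes through your own Metal pipeline, so shaders can transform the image before it reaches the screen.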
The core challenge in streaming video generation is maintaining content consistency over long contexts, which places high demands on memory design. Mo...
This paper does not introduce a novel method but instead establishes a straightforward, incremental, yet essential baseline for video temporal grounding (VTG), ...
Non-parametric quantization has received much attention due to its efficiency on parameters and scalability to a large codebook. In this paper, we present a uni...
We introduce CRISP, a method that recovers simulatable human motion and scene geometry from monocular video. Prior work on joint human-scene reconstruction reli...
Recent advancements in 3D generative modeling have significantly improved the generation realism, yet the field is still hampered by existing representations, w...
Video foundation models generate visually realistic and temporally coherent content, but their reliability as world simulators depends on whether they capture p...
We propose VASA-3D, an audio-driven, single-shot 3D head avatar generator. This research tackles two major challenges: capturing the subtle expression details p...
We introduce ART, Articulated Reconstruction Transformer -- a category-agnostic, feed-forward model that reconstructs complete 3D articulated objects from only ...
Achieving truly adaptive embodied intelligence requires agents that learn not just by imitating static demonstrations, but by continuously improving through env...
Visual Sentiment Analysis (VSA) is a challenging task due to the vast diversity of emotionally salient images and the inherent difficulty of acquiring sufficien...
Timely and accurate lymphoma diagnosis is essential for guiding cancer treatment. Standard diagnostic practice combines hematoxylin and eosin (HE)-stained whole...
This paper introduces JMMMU-Pro, an image-based Japanese Multi-discipline Multimodal Understanding Benchmark, and Vibe Benchmark Construction, a scalable constr...
Article URL: https://alpr.watch/ Comments URL: https://news.ycombinator.com/item?id=46290916 Points: 224 Comments: 114...
Fresh off releasing the latest version of its Olmo foundation model, the Allen Institute for AI (Ai2) launched its open-source video model, Molmo 2, on Tuesday, a...
AlphaFlow provides a smoother training schedule for MeanFlow image models, reducing the conflict between its two objectives and accelerating learning. Overview...
Video diffusion models have revolutionized generative video synthesis, but they are imprecise, slow, and can be opaque during generation -- keeping users in the...
Modern neural architectures for 3D point cloud processing contain both convolutional layers and attention blocks, but the best way to assemble them remains uncl...
The quality of the latent space in visual tokenizers (e.g., VAEs) is crucial for modern generative models. However, the standard reconstruction-based training p...
We present Recurrent Video Masked-Autoencoders (RVM): a novel video representation learning approach that uses a transformer-based recurrent neural network to a...
Generalization remains the central challenge for interactive 3D scene generation. Existing learning-based approaches ground spatial understanding in limited sce...
Recent feed-forward reconstruction models like VGGT and π^3 achieve impressive reconstruction quality but cannot process streaming videos due to quadratic memor...
Recent progress in image-to-3D has opened up immense possibilities for design, AR/VR, and robotics. However, to use AI-generated 3D assets in real applications,...
In this paper, we present JoVA, a unified framework for joint video-audio generation. Despite recent encouraging advances, existing methods face two critical li...
We introduce Interactive Intelligence, a novel paradigm of digital human that is capable of personality-aligned expression, adaptive interaction, and self-evolu...
Textual Inversion (TI) is an efficient approach to text-to-image personalization but often fails on complex prompts. We trace these failures to embedding norm i...
Dexterous manipulation is challenging because it requires understanding how subtle hand motion influences the environment through contact with objects. We intro...
The validation and verification of artificial intelligence (AI) models through robustness assessment are essential to guarantee the reliable performance of inte...
We introduce the Do-Undo task and benchmark to address a critical gap in vision-language models: understanding and generating physically plausible scene transfo...
Recent deep learning frameworks in histopathology, particularly multiple instance learning (MIL) combined with pathology foundational models (PFMs), have shown ...
Real ones will know that Mount Rainier looks too big in this image, but the re-creation of a Washington State ferry in this AI image is uncanny. This is The Ste...
AI Surveillance on British Roads. On a grey morning along the A38 near Plymouth, a white van equipped with twin cameras captures thousands of images per hour, i...
Introduction. An AI background remover can feel almost magical when it works well, and frustrating when it doesn't. The difference usually comes down to two thin...
The recent success of 3D Gaussian Splatting (3DGS) has reshaped novel view synthesis by enabling fast optimization and real-time rendering of high-quality radia...
Large-scale video generation models have shown remarkable potential in modeling photorealistic appearance and lighting interactions in real-world scenes. Howeve...
We present Particulate, a feed-forward approach that, given a single static 3D mesh of an everyday object, directly infers all attributes of the underlying arti...
The collection of large-scale and diverse robot demonstrations remains a major bottleneck for imitation learning, as real-world data acquisition is costly and s...
Reality is a dance between rigid constraints and deformable structures. For video models, that means generating motion that preserves fidelity as well as struct...
Accurately quantifying vitiligo extent in routine clinical photographs is crucial for longitudinal monitoring of treatment response. We propose a trustworthy, f...
Video matting remains limited by the scale and realism of existing datasets. While leveraging segmentation data can enhance semantic stability, the lack of effe...
Model fingerprint detection techniques have emerged as a promising approach for attributing AI-generated images to their source models, but their robustness und...
Generating realistic synthetic microscopy images is critical for training deep learning models in label-scarce environments, such as cell counting with many cel...
Visual generation grounded in Visual Foundation Model (VFM) representations offers a highly promising unified pathway for integrating visual understanding, perc...
Reliable interpretation of multimodal data in dentistry is essential for automated oral healthcare, yet current multimodal large language models (MLLMs) struggl...
Key frame selection in video understanding presents significant challenges. Traditional top-K selection methods, which score frames independently, often fail to...
The growing demand for real-time DNN applications on edge devices necessitates faster inference of increasingly complex models. Although many devices include sp...
We introduce StereoSpace, a diffusion-based framework for monocular-to-stereo synthesis that models geometry purely through viewpoint conditioning, without expl...
Generative world models are reshaping embodied AI, enabling agents to synthesize realistic 4D driving environments that look convincing but often fail physicall...