github.com/NVIDIA
NVIDIA NemoClaw is an open source reference stack that makes it simpler and safer to run OpenClaw always-on assistants. It installs the NVIDIA OpenShell runtime, part of the NVIDIA Agent Toolkit, which provides additional security for running autonomous agents. It also includes open source models such as NVIDIA Nemotron. NemoClaw is alpha software, available in early preview starting March 16, 2026.
This project implements the well-known multi-GPU Jacobi solver with different multi-GPU programming models:
- single_threaded_copy: single-threaded, using cudaMemcpy for inter-GPU communication
- multi_threaded_copy: multi-threaded with OpenMP, using cudaMemcpy for inter-GPU communication
- multi_threaded_copy_overlap: multi-threaded with OpenMP, using cudaMemcpy for inter-GPU communication with overlapping communication
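For reference, the per-GPU kernel is the same in every variant: the standard 5-point Jacobi update (shown here for the Laplace case), with the variants differing only in how the halo rows are exchanged between GPUs:

u_new(i, j) = 0.25 * (u(i-1, j) + u(i+1, j) + u(i, j-1) + u(i, j+1))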
LATEST RELEASE / DEVELOPMENT VERSION: The main branch tracks the latest released beta version: 0.15.0. For the latest development version, check out the develop branch. DISCLAIMER: The beta release is undergoing active development and may be subject to changes and improvements, which could cause instability and unexpected behavior. We currently do not recommend deploying this beta version in a production environment.
github.com/NVIDIA-Merlin
NVTabular is a feature engineering and preprocessing library for tabular data that is designed to easily manipulate terabyte-scale datasets and train deep learning (DL) based recommender systems. It provides a high-level abstraction that simplifies code and accelerates computation on the GPU using the RAPIDS Dask-cuDF library. NVTabular is a component of NVIDIA Merlin, an open source framework for building recommender systems.
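A minimal sketch of how a preprocessing workflow is typically expressed with NVTabular's column-selector API; the column names and file paths below are hypothetical:

import nvtabular as nvt
from nvtabular import ops

# Hypothetical columns: "user_id"/"item_id" (categorical), "price" (continuous)
cat_features = ["user_id", "item_id"] >> ops.Categorify()
cont_features = ["price"] >> ops.FillMissing() >> ops.Normalize()
workflow = nvt.Workflow(cat_features + cont_features)

dataset = nvt.Dataset("train/*.parquet")        # lazily backed by Dask-cuDF
workflow.fit(dataset)                           # gather statistics on the GPU
workflow.transform(dataset).to_parquet("processed/")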
github.com/NVIDIAGameWorks
github.com/NVIDIA-NeMo
2026-03: Nemotron 3 VoiceChat has been released in Early Access. Built on the Nemotron Nano v2 LLM backbone with a Nemotron speech and TTS decoder, VoiceChat delivers full-duplex, natural, interruptible conversations with low latency. Try out the demo and apply for early access. 2026-03: Nemotron-Speech-Streaming v2603 has been updated. It has been trained on a larger and more diverse corpus.
github.com/NVIDIA-AI-IOT
6/01/2021 - The JetPack 4.5.1 based image has been updated and comes pre-configured for JetRacer. Details here. JetRacer is an autonomous AI racecar using the NVIDIA Jetson Nano. With JetRacer you will:
- Go fast - optimize for high framerates to move at high speeds
- Have fun - follow examples and program interactively from your web browser
By building and experimenting with JetRacer you will create fast AI pipelines.
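A minimal sketch of driving the car interactively from a notebook, assuming the NvidiaRacecar class that ships with the JetRacer package; throttle and steering are normalized values in [-1.0, 1.0]:

from jetracer.nvidia_racecar import NvidiaRacecar
import time

car = NvidiaRacecar()
car.steering = 0.0   # center the wheels
car.throttle = 0.2   # gentle forward speed
time.sleep(1.0)
car.throttle = 0.0   # stop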
JetCard is a system configuration that makes it easy to get started with AI. It comes pre-loaded with:
- A Jupyter Lab server that starts on boot for easy web programming
- A script to display the Jetson's IP address (and other stats)
- The popular deep learning frameworks PyTorch and TensorFlow
After configuring your system using JetCard, you can get started prototyping AI projects from your web browser.
The NVIDIA Data Loading Library (DALI) is a GPU-accelerated library for data loading and pre-processing to accelerate deep learning applications. It provides a collection of highly optimized building blocks for loading and processing image, video, and audio data. It can be used as a portable drop-in replacement for built-in data loaders and data iterators in popular deep learning frameworks.
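A minimal sketch of a DALI pipeline used as such a drop-in data loader, assuming JPEG images stored under a hypothetical ./data directory in per-class subfolders:

from nvidia.dali import pipeline_def, fn, types

@pipeline_def(batch_size=32, num_threads=4, device_id=0)
def image_pipeline(file_root="./data"):
    jpegs, labels = fn.readers.file(file_root=file_root, random_shuffle=True)
    images = fn.decoders.image(jpegs, device="mixed")   # decode on the GPU
    images = fn.resize(images, resize_x=224, resize_y=224)
    images = fn.crop_mirror_normalize(images, dtype=types.FLOAT, output_layout="CHW")
    return images, labels

pipe = image_pipeline()
pipe.build()
images, labels = pipe.run()   # one GPU-resident batch per call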
nv-wavenet is a CUDA reference implementation of autoregressive WaveNet inference. In particular, it implements the WaveNet variant described by Deep Voice. nv-wavenet only implements the autoregressive portion of the network; conditioning vectors must be provided externally. More details about the implementation and performance can be found on the NVIDIA Developer Blog.
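A rough PyTorch sketch, not the nv-wavenet API, of the sample-by-sample autoregressive loop that the CUDA kernels fuse; step_fn stands in for one WaveNet forward step and cond is the externally supplied conditioning tensor:

import torch

def autoregressive_sample(step_fn, cond):
    # cond: (T, cond_channels); step_fn(prev, cond_t) -> logits over mu-law bins
    samples = []
    prev = torch.zeros(1, dtype=torch.long)   # start from silence
    for t in range(cond.shape[0]):
        logits = step_fn(prev, cond[t])
        prev = torch.distributions.Categorical(logits=logits).sample()
        samples.append(prev)
    return torch.stack(samples)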
CUTLASS 4.4.0 - Feb 2026. CUTLASS is a collection of abstractions for implementing high-performance matrix-matrix multiplication (GEMM) and related computations at all levels and scales within CUDA. It incorporates strategies for hierarchical decomposition and data movement. CUTLASS decomposes these "moving parts" into reusable, modular software components and abstractions.
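The GEMM these components compose is the standard update D = alpha * (A x B) + beta * C, where A, B, C, and D are matrices and alpha, beta are scalars; CUTLASS tiles this computation hierarchically across threadblocks, warps, and instructions.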