
Update homepage ticker for 1.12 #1062


Merged: 3 commits merged on Jun 28, 2022
4 changes: 2 additions & 2 deletions _news/news-item-1.md
@@ -1,5 +1,5 @@
---
order: 1
link: https://pytorch.org/blog/pytorch-1.11-released
summary: NEW! PyTorch 1.11, TorchData, and functorch are now available
link: https://pytorch.org/blog/pytorch-1.12-released/
summary: "New release! PyTorch 1.12: TorchArrow, Functional API for Modules and nvFuser are now available!"
---
4 changes: 2 additions & 2 deletions _news/news-item-2.md
@@ -1,5 +1,5 @@
---
order: 2
link: https://pytorch.org/blog/pytorch-1.11-new-library-releases
summary: NEW! Introducing TorchRec, and other domain library updates in PyTorch 1.11
link: https://pytorch.org/blog/pytorch-1.12-new-library-releases/
summary: New libraries updates in PyTorch 1.12
---
5 changes: 0 additions & 5 deletions _news/news-item-3.md

This file was deleted.

15 changes: 11 additions & 4 deletions docs/master/_sources/generated/torch.cuda.stream.rst.txt
@@ -1,6 +1,13 @@
torch.cuda.stream
=================

.. role:: hidden
   :class: hidden-section
.. currentmodule:: torch.cuda

.. autofunction:: stream

Stream
======

.. autoclass:: Stream
   :inherited-members:
   :members:

.. autogenerated from source/_templates/autosummary/class.rst
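The retitled source now documents the torch.cuda.Stream class itself rather than the torch.cuda.stream() context-manager helper that the old page covered. A minimal sketch of how the two relate (illustrative only, not part of this PR; it assumes a CUDA-capable device, and the tensor values are arbitrary):

    import torch

    # torch.cuda.Stream is the stream object; torch.cuda.stream(...) is the
    # context manager that makes it the current stream for enqueued work.
    s = torch.cuda.Stream()                      # stream on the current device
    with torch.cuda.stream(s):
        x = torch.ones(8, device="cuda") * 2     # kernel launched on s
    s.synchronize()                              # host waits for s to finish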
docs/master/_sources/generated/torch.nn.utils.prune.identity.rst.txt
@@ -1,6 +1,13 @@
torch.nn.utils.prune.identity
=============================

.. role:: hidden
   :class: hidden-section
.. currentmodule:: torch.nn.utils.prune

.. autofunction:: identity

Identity
========

.. autoclass:: Identity
   :inherited-members:
   :members:

.. autogenerated from source/_templates/autosummary/class.rst
126 changes: 105 additions & 21 deletions docs/master/generated/torch.cuda.stream.html
@@ -10,14 +10,14 @@

<meta name="viewport" content="width=device-width, initial-scale=1.0">

<title>torch.cuda.stream &mdash; PyTorch master documentation</title>
<title>Stream &mdash; PyTorch master documentation</title>






<link rel="canonical" href="https://pytorch.org/docs/stable/generated/torch.cuda.stream.html"/>
<link rel="canonical" href="https://pytorch.org/docs/stable/generated/torch.cuda.Stream.html"/>



@@ -41,8 +41,8 @@
<link rel="stylesheet" href="../_static/css/jit.css" type="text/css" />
<link rel="index" title="Index" href="../genindex.html" />
<link rel="search" title="Search" href="../search.html" />
<link rel="next" title="torch.cuda.synchronize" href="torch.cuda.synchronize.html" />
<link rel="prev" title="torch.cuda.set_sync_debug_mode" href="torch.cuda.set_sync_debug_mode.html" />
<link rel="next" title="ExternalStream" href="torch.cuda.ExternalStream.html" />
<link rel="prev" title="torch.cuda.comm.gather" href="torch.cuda.comm.gather.html" />

<!--
Search engines should not index the master version of documentation.
@@ -242,7 +242,7 @@


<div>
<a style="color:#F05732" href="https://pytorch.org/docs/stable/generated/torch.cuda.stream.html">
<a style="color:#F05732" href="https://pytorch.org/docs/stable/generated/torch.cuda.Stream.html">
You are viewing unstable developer preview docs.
Click here to view docs for latest stable release.
</a>
@@ -397,13 +397,13 @@

<li><a href="../cuda.html">torch.cuda</a> &gt;</li>

<li>torch.cuda.stream</li>
<li>Stream</li>


<li class="pytorch-breadcrumbs-aside">


<a href="../_sources/generated/torch.cuda.stream.rst.txt" rel="nofollow"><img src="../_static/images/view-page-source-icon.svg"></a>
<a href="../_sources/generated/torch.cuda.Stream.rst.txt" rel="nofollow"><img src="../_static/images/view-page-source-icon.svg"></a>


</li>
@@ -429,21 +429,105 @@
<div role="main" class="main-content" itemscope="itemscope" itemtype="http://schema.org/Article">
<article itemprop="articleBody" id="pytorch-article" class="pytorch-article">

<section id="torch-cuda-stream">
<h1>torch.cuda.stream<a class="headerlink" href="#torch-cuda-stream" title="Permalink to this heading">¶</a></h1>
<dl class="py function">
<dt class="sig sig-object py" id="torch.cuda.stream">
<span class="sig-prename descclassname"><span class="pre">torch.cuda.</span></span><span class="sig-name descname"><span class="pre">stream</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">stream</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="../_modules/torch/cuda.html#stream"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#torch.cuda.stream" title="Permalink to this definition">¶</a></dt>
<dd><p>Wrapper around the Context-manager StreamContext that
selects a given stream.</p>
<section id="stream">
<h1>Stream<a class="headerlink" href="#stream" title="Permalink to this heading">¶</a></h1>
<dl class="py class">
<dt class="sig sig-object py" id="torch.cuda.Stream">
<em class="property"><span class="pre">class</span><span class="w"> </span></em><span class="sig-prename descclassname"><span class="pre">torch.cuda.</span></span><span class="sig-name descname"><span class="pre">Stream</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">device</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">None</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">priority</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">0</span></span></em>, <em class="sig-param"><span class="o"><span class="pre">**</span></span><span class="n"><span class="pre">kwargs</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="../_modules/torch/cuda/streams.html#Stream"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#torch.cuda.Stream" title="Permalink to this definition">¶</a></dt>
<dd><p>Wrapper around a CUDA stream.</p>
<p>A CUDA stream is a linear sequence of execution that belongs to a specific
device, independent from other streams. See <a class="reference internal" href="../notes/cuda.html#cuda-semantics"><span class="std std-ref">CUDA semantics</span></a> for
details.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters<span class="colon">:</span></dt>
<dd class="field-odd"><p><strong>stream</strong> (<a class="reference internal" href="torch.cuda.Stream.html#torch.cuda.Stream" title="torch.cuda.Stream"><em>Stream</em></a>) – selected stream. This manager is a no-op if it’s
<code class="docutils literal notranslate"><span class="pre">None</span></code>.</p>
<dd class="field-odd"><ul class="simple">
<li><p><strong>device</strong> (<a class="reference internal" href="../tensor_attributes.html#torch.device" title="torch.device"><em>torch.device</em></a><em> or </em><a class="reference external" href="https://docs.python.org/3/library/functions.html#int" title="(in Python v3.10)"><em>int</em></a><em>, </em><em>optional</em>) – a device on which to allocate
the stream. If <a class="reference internal" href="torch.cuda.device.html#torch.cuda.device" title="torch.cuda.device"><code class="xref py py-attr docutils literal notranslate"><span class="pre">device</span></code></a> is <code class="docutils literal notranslate"><span class="pre">None</span></code> (default) or a negative
integer, this will use the current device.</p></li>
<li><p><strong>priority</strong> (<a class="reference external" href="https://docs.python.org/3/library/functions.html#int" title="(in Python v3.10)"><em>int</em></a><em>, </em><em>optional</em>) – priority of the stream. Can be either
-1 (high priority) or 0 (low priority). By default, streams have
priority 0.</p></li>
</ul>
</dd>
</dl>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>Although CUDA versions &gt;= 11 support more than two levels of
priorities, in PyTorch, we only support two levels of priorities.</p>
</div>
<dl class="py method">
<dt class="sig sig-object py" id="torch.cuda.Stream.query">
<span class="sig-name descname"><span class="pre">query</span></span><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="reference internal" href="../_modules/torch/cuda/streams.html#Stream.query"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#torch.cuda.Stream.query" title="Permalink to this definition">¶</a></dt>
<dd><p>Checks if all the work submitted has been completed.</p>
<dl class="field-list simple">
<dt class="field-odd">Returns<span class="colon">:</span></dt>
<dd class="field-odd"><p>A boolean indicating if all kernels in this stream are completed.</p>
</dd>
</dl>
</dd></dl>

<dl class="py method">
<dt class="sig sig-object py" id="torch.cuda.Stream.record_event">
<span class="sig-name descname"><span class="pre">record_event</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">event</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">None</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="../_modules/torch/cuda/streams.html#Stream.record_event"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#torch.cuda.Stream.record_event" title="Permalink to this definition">¶</a></dt>
<dd><p>Records an event.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters<span class="colon">:</span></dt>
<dd class="field-odd"><p><strong>event</strong> (<a class="reference internal" href="torch.cuda.Event.html#torch.cuda.Event" title="torch.cuda.Event"><em>torch.cuda.Event</em></a><em>, </em><em>optional</em>) – event to record. If not given, a new one
will be allocated.</p>
</dd>
<dt class="field-even">Returns<span class="colon">:</span></dt>
<dd class="field-even"><p>Recorded event.</p>
</dd>
</dl>
<p>..Note:: In eager mode stream is of type Stream class while in JIT it is
an object of the custom class <code class="docutils literal notranslate"><span class="pre">torch.classes.cuda.Stream</span></code>.</p>
</dd></dl>

<dl class="py method">
<dt class="sig sig-object py" id="torch.cuda.Stream.synchronize">
<span class="sig-name descname"><span class="pre">synchronize</span></span><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="reference internal" href="../_modules/torch/cuda/streams.html#Stream.synchronize"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#torch.cuda.Stream.synchronize" title="Permalink to this definition">¶</a></dt>
<dd><p>Wait for all the kernels in this stream to complete.</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>This is a wrapper around <code class="docutils literal notranslate"><span class="pre">cudaStreamSynchronize()</span></code>: see
<a class="reference external" href="https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__STREAM.html">CUDA Stream documentation</a> for more info.</p>
</div>
</dd></dl>

<dl class="py method">
<dt class="sig sig-object py" id="torch.cuda.Stream.wait_event">
<span class="sig-name descname"><span class="pre">wait_event</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">event</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="../_modules/torch/cuda/streams.html#Stream.wait_event"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#torch.cuda.Stream.wait_event" title="Permalink to this definition">¶</a></dt>
<dd><p>Makes all future work submitted to the stream wait for an event.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters<span class="colon">:</span></dt>
<dd class="field-odd"><p><strong>event</strong> (<a class="reference internal" href="torch.cuda.Event.html#torch.cuda.Event" title="torch.cuda.Event"><em>torch.cuda.Event</em></a>) – an event to wait for.</p>
</dd>
</dl>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>This is a wrapper around <code class="docutils literal notranslate"><span class="pre">cudaStreamWaitEvent()</span></code>: see
<a class="reference external" href="https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__STREAM.html">CUDA Stream documentation</a> for more info.</p>
<p>This function returns without waiting for <code class="xref py py-attr docutils literal notranslate"><span class="pre">event</span></code>: only future
operations are affected.</p>
</div>
</dd></dl>
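The record_event()/wait_event() pair documented above gives explicit cross-stream ordering. A short sketch (illustrative only, not part of the rendered page; assumes a CUDA device, and the names and sizes are arbitrary):

    import torch

    producer = torch.cuda.Stream()
    consumer = torch.cuda.Stream()

    x = torch.randn(4096, device="cuda")                # enqueued on the default stream
    producer.wait_stream(torch.cuda.current_stream())   # make sure x is ready before producer reads it

    with torch.cuda.stream(producer):
        y = x.sin()                                      # kernel enqueued on producer
    ev = producer.record_event()                         # event marking the point after sin()

    consumer.wait_event(ev)                              # future work on consumer waits for ev
    with torch.cuda.stream(consumer):
        z = y.cos()                                      # runs only after ev has completed
    torch.cuda.synchronize()                             # host-side barrier to finish the example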

<dl class="py method">
<dt class="sig sig-object py" id="torch.cuda.Stream.wait_stream">
<span class="sig-name descname"><span class="pre">wait_stream</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">stream</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="../_modules/torch/cuda/streams.html#Stream.wait_stream"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#torch.cuda.Stream.wait_stream" title="Permalink to this definition">¶</a></dt>
<dd><p>Synchronizes with another stream.</p>
<p>All future work submitted to this stream will wait until all kernels
submitted to a given stream at the time of call complete.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters<span class="colon">:</span></dt>
<dd class="field-odd"><p><strong>stream</strong> (<a class="reference internal" href="#torch.cuda.Stream" title="torch.cuda.Stream"><em>Stream</em></a>) – a stream to synchronize.</p>
</dd>
</dl>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>This function returns without waiting for currently enqueued
kernels in <a class="reference internal" href="torch.cuda.stream.html#torch.cuda.stream" title="torch.cuda.stream"><code class="xref py py-attr docutils literal notranslate"><span class="pre">stream</span></code></a>: only future operations are affected.</p>
</div>
</dd></dl>

</dd></dl>

</section>
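Taken together, the constructor and methods documented above cover the usual multi-stream pattern: build streams (optionally with priority -1), order them with wait_stream(), and check or wait for completion with record_event(), synchronize(), and query(). A hedged end-to-end sketch (illustrative only; assumes a CUDA device, and the sizes and variable names are arbitrary):

    import torch

    s1 = torch.cuda.Stream()                      # default priority (0)
    s2 = torch.cuda.Stream(priority=-1)           # high priority, per the docs above

    a = torch.randn(1024, 1024, device="cuda")    # enqueued on the default stream
    s1.wait_stream(torch.cuda.current_stream())   # s1 waits until a is ready

    with torch.cuda.stream(s1):
        b = a @ a                                 # matmul queued on s1

    s2.wait_stream(s1)                            # s2 waits for work already queued on s1
    with torch.cuda.stream(s2):
        c = b.relu()

    ev = s2.record_event()                        # returns a torch.cuda.Event
    ev.synchronize()                              # host blocks until the event completes
    assert s1.query() and s2.query()              # all queued kernels have finished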
@@ -456,10 +540,10 @@ <h1>torch.cuda.stream<a class="headerlink" href="#torch-cuda-stream" title="Perm

<div class="rst-footer-buttons" role="navigation" aria-label="footer navigation">

<a href="torch.cuda.synchronize.html" class="btn btn-neutral float-right" title="torch.cuda.synchronize" accesskey="n" rel="next">Next <img src="../_static/images/chevron-right-orange.svg" class="next-page"></a>
<a href="torch.cuda.ExternalStream.html" class="btn btn-neutral float-right" title="ExternalStream" accesskey="n" rel="next">Next <img src="../_static/images/chevron-right-orange.svg" class="next-page"></a>


<a href="torch.cuda.set_sync_debug_mode.html" class="btn btn-neutral" title="torch.cuda.set_sync_debug_mode" accesskey="p" rel="prev"><img src="../_static/images/chevron-right-orange.svg" class="previous-page"> Previous</a>
<a href="torch.cuda.comm.gather.html" class="btn btn-neutral" title="torch.cuda.comm.gather" accesskey="p" rel="prev"><img src="../_static/images/chevron-right-orange.svg" class="previous-page"> Previous</a>

</div>

@@ -491,7 +575,7 @@ <h1>torch.cuda.stream<a class="headerlink" href="#torch-cuda-stream" title="Perm
<div class="pytorch-right-menu" id="pytorch-right-menu">
<div class="pytorch-side-scroll" id="pytorch-side-scroll-right">
<ul>
<li><a class="reference internal" href="#">torch.cuda.stream</a></li>
<li><a class="reference internal" href="#">Stream</a></li>
</ul>

</div>