
Commit f16efbc

Fixes to Ax MOO NAS tutorial (#2013)
- Add and link to correct logo.
- Fix link to training script source.
- Fix some links to docs.
1 parent: ebce103

File tree

3 files changed: +6, -6 lines


_static/img/ax_logo.png (added, binary image, 77.4 KB)

index.rst (+1, -1)

@@ -510,7 +510,7 @@ What's new in PyTorch tutorials?
 .. customcarditem::
    :header: Multi-Objective Neural Architecture Search with Ax
    :card_description: Learn how to use Ax to search over architectures find optimal tradeoffs between accuracy and latency.
-   :image: _static/img/ray-tune.png
+   :image: _static/img/ax_logo.png
    :link: intermediate/ax_multiobjective_nas_tutorial.html
    :tags: Model-Optimization,Best-Practice,Ax,TorchX


intermediate_source/ax_multiobjective_nas_tutorial.py (+5, -5)
@@ -46,7 +46,7 @@
 # -----------------------
 #
 # Our goal is to optimize the PyTorch Lightning training job defined in
-# `mnist_train_nas.py <https://github.com/pytorch/tutorials/tree/master/beginner_source/mnist_train_nas.py>`__.
+# `mnist_train_nas.py <https://github.com/pytorch/tutorials/tree/master/intermediate_source/mnist_train_nas.py>`__.
 # To do this using TorchX, we write a helper function that takes in
 # the values of the architcture and hyperparameters of the training
 # job and creates a `TorchX AppDef <https://pytorch.org/torchx/latest/basics.html>`__
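For context, the helper this hunk's prose describes packages one training run as a TorchX AppDef. A minimal sketch, closely following the tutorial's text; the exact flag names and the `utils.python` parameters here are illustrative assumptions, not the commit's code:

    from pathlib import Path

    from torchx import specs
    from torchx.components import utils


    def trainer(
        log_path: str,
        hidden_size_1: int,
        hidden_size_2: int,
        learning_rate: float,
        epochs: int,
        dropout: float,
        batch_size: int,
        trial_idx: int = -1,
    ) -> specs.AppDef:
        # Give each trial its own log subdirectory, keyed by trial index,
        # so a metric class can find the trial's results later.
        if trial_idx >= 0:
            log_path = Path(log_path).joinpath(str(trial_idx)).absolute().as_posix()

        return utils.python(
            # Flags forwarded to the training script (assumed interface).
            "--log_path", log_path,
            "--hidden_size_1", str(hidden_size_1),
            "--hidden_size_2", str(hidden_size_2),
            "--learning_rate", str(learning_rate),
            "--epochs", str(epochs),
            "--dropout", str(dropout),
            "--batch_size", str(batch_size),
            # The training script whose link this hunk fixes.
            script="mnist_train_nas.py",
            name="trainer",
        )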
@@ -103,7 +103,7 @@ def trainer(
 # Setting up the Runner
 # ---------------------
 #
-# Ax’s `Runner <https://ax.dev/api/core.html#module-ax.core.runner>`__
+# Ax’s `Runner <https://ax.dev/api/core.html#ax.core.runner.Runner>`__
 # abstraction allows writing interfaces to various backends.
 # Ax already comes with Runner for TorchX, and so we just need to
 # configure it. For the purpose of this tutorial we run jobs locally
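Concretely, the configuration this hunk refers to looks roughly like the following; `TorchXRunner` is Ax's bundled runner for TorchX, and the scheduler choice and parameter values are assumptions based on the surrounding tutorial text:

    import tempfile

    from ax.runners.torchx import TorchXRunner

    # Directory the local training jobs write their logs into.
    log_dir = tempfile.mkdtemp()

    ax_runner = TorchXRunner(
        tracker_base="/tmp/",
        component=trainer,  # the AppDef factory sketched above
        # Arguments held constant across every trial.
        component_const_params={"log_path": log_dir},
        cfg={},
        # Run each trial locally, in the current working directory.
        scheduler="local_cwd",
    )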
@@ -228,7 +228,7 @@ def trainer(
 # fashion locally and write the results to the ``log_dir`` based on the trial
 # index (see the ``trainer()`` function above). We will define a metric
 # class that is aware of that logging directory. By subclassing
-# `TensorboardCurveMetric <https://ax.dev/tutorials/multiobjective_optimization.html>`__
+# `TensorboardCurveMetric <https://ax.dev/api/metrics.html?highlight=tensorboardcurvemetric#ax.metrics.tensorboard.TensorboardCurveMetric>`__
 # we get the logic to read and parse the Tensorboard logs for free.
 #

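The subclass the hunk describes is small; a sketch, reusing the `log_dir` and per-trial-index directory layout assumed above:

    from pathlib import Path

    from ax.metrics.tensorboard import TensorboardCurveMetric


    class MyTensorboardMetric(TensorboardCurveMetric):

        # Map each trial to the Tensorboard log directory its job wrote
        # to; the trial index doubles as the subdirectory name.
        @classmethod
        def get_ids_from_trials(cls, trials):
            return {
                trial.index: Path(log_dir).joinpath(str(trial.index)).as_posix()
                for trial in trials
            }

        # Logs are only complete after a job finishes, so tell Ax not to
        # poll this metric while a trial is still running.
        @classmethod
        def is_available_while_running(cls):
            return False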
@@ -314,7 +314,7 @@ def is_available_while_running(cls):
 # Creating the Ax Experiment
 # --------------------------
 #
-# In Ax, the `Experiment <https://ax.dev/api/core.html#module-ax.core.experiment>`__
+# In Ax, the `Experiment <https://ax.dev/api/core.html#ax.core.experiment.Experiment>`__
 # object is the object that stores all the information about the problem
 # setup.
 #
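Putting the pieces together is a single constructor call; a sketch assuming a `search_space` and a multi-objective `opt_config` (accuracy versus latency, per the card description above) were built earlier in the tutorial:

    from ax.core import Experiment

    # search_space and opt_config are assumed to be defined earlier
    # (parameter ranges plus the accuracy/latency objectives).
    experiment = Experiment(
        name="torchx_mnist",  # illustrative name
        search_space=search_space,
        optimization_config=opt_config,
        runner=ax_runner,
    )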
@@ -338,7 +338,7 @@ def is_available_while_running(cls):
 # Choosing the GenerationStrategy
 # -------------------------------
 #
-# A `GenerationStrategy <https://ax.dev/api/modelbridge.html#module-ax.modelbridge.generation_strategy>`__
+# A `GenerationStrategy <https://ax.dev/api/modelbridge.html#ax.modelbridge.generation_strategy.GenerationStrategy>`__
 # is the abstract representation of how we would like to perform the
 # optimization. While this can be customized (if you’d like to do so, see
 # `this tutorial <https://ax.dev/tutorials/generation_strategy.html>`__),
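Rather than customizing, the tutorial can lean on Ax's dispatch helper to pick a strategy automatically; a sketch, with `total_trials` as an assumed evaluation budget:

    from ax.modelbridge.dispatch_utils import choose_generation_strategy

    total_trials = 48  # overall evaluation budget (illustrative)

    # Let Ax choose a sensible strategy for this search space and budget,
    # e.g. quasi-random Sobol initialization followed by a Bayesian
    # optimization model.
    gs = choose_generation_strategy(
        search_space=experiment.search_space,
        num_trials=total_trials,
    )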
