
Split utils #306

Open
VsevolodX wants to merge 47 commits into main from feature/SOF-7894-1

Conversation

@VsevolodX (Member) commented May 13, 2026

  • update: ase and pymatgen defects in emfs
  • update: separate json display from made
  • update: use made profile
  • update: use api profile
  • chore: update pyproject
  • update: add utils core
  • update: add utils jl
  • update: add utils other

Summary by CodeRabbit

  • Documentation

    • Updated settings file path references in README files and notebooks.
  • Chores

    • Reorganized package structure from mat3ra-api-examples to mat3ra-notebooks-utils.
    • Consolidated utility imports and module paths across 100+ notebooks.
    • Updated JupyterLite and Pyodide dependency configurations.
    • Removed legacy webinar example scripts and demonstrations.


@review-notebook-app

Check out this pull request on ReviewNB

See visual diffs & provide feedback on Jupyter Notebooks.


Powered by ReviewNB

@coderabbitai

coderabbitai Bot commented May 13, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Use the checkboxes below for quick actions:

  • ▶️ Resume reviews
  • 🔍 Trigger review
📝 Walkthrough

Adds a new mat3ra.notebooks_utils package (env/auth/IO/plot/JupyterLite), replaces legacy utils across many notebooks/scripts, introduces interruptible polling, and updates job/material helpers and plotting. Includes unit tests for interrupts.

Changes

Utilities package, packaging, and notebook rewiring

  • Core env/logger/settings, IO dispatch, JupyterLite URL/UI/Packages, plotting backends
    Files: src/py/mat3ra/notebooks_utils/*, src/py/mat3ra/notebooks_utils/jupyterlite/*
    Summary: Adds enums/environment/logger/settings; unified IO for Python/Pyodide; JupyterLite URL/UI/packages helpers; Plotly/Matplotlib renderers; dataframe/ui helpers; and a high-level UI selector.
  • Auth device flow, async job polling, material helpers, visualize/plot refactor
    Files: src/py/mat3ra/notebooks_utils/auth.py, .../python/other/api/*, .../python/other/material/*
    Summary: Implements OIDC device flow; central authenticate; async job polling; bulk-material resolver; refactors plotting/visualize utilities; removes deprecated helpers.
  • Workflow notebooks rewiring (band_gap, surface_energy, passivate_slab)
    Files: other/materials_designer/workflows/*, other/materials_designer/passivate_slab.ipynb
    Summary: Rewires install/auth/helpers and visualization to mat3ra.notebooks_utils across workflow notebooks.
  • Materials Designer and specific example notebooks rewiring
    Files: other/materials_designer/*
    Summary: Migrates designer/specific examples to wheel-based install and new jupyterlite/materials/visualization helpers.
  • Jobs notebooks and Materials Project examples rewiring
    Files: examples/job/*, other/materialsproject/*
    Summary: Switches job notebooks and Materials Project scripts to new install paths and helpers.
  • Interrupts unit tests
    Files: tests/py/unit/test_jupyterlite_interrupts.py
    Summary: Adds tests for interruptible polling and abort handling.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes


Suggested reviewers

  • pranabdas
  • timurbazhirov

Poem

A rabbit taps keys in the Lite of the moon,
Wheels roll in, imports retune.
Utils hop nests, plots dance anew,
Jobs wait politely, interrupt on cue.
JSONs sparkle, materials align—
Mat3ra’s burrow, now sleek by design. 🐇✨

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Commit unit tests in branch feature/SOF-7894-1

"source": [
"from utils.settings import ENDPOINT_ARGS, ACCOUNT_ID\n",
"from utils.visualize import display_JSON\n",
"from utils.io import display_JSON\n",

Should be imported from mat3ra notebooks utils now


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 15

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
other/materials_designer/specific_examples/optimization_interface_film_xy_position_graphene_nickel.ipynb (1)

274-274: ⚠️ Potential issue | 🔴 Critical | ⚡ Quick win

Update import path: old module path no longer exists

Line 274 imports from mat3ra.utils.jupyterlite.plot, but this module path has been refactored and no longer exists. Update to mat3ra.notebooks.utils.other.jupyterlite.plot to fix the broken import. Multiple other files in the codebase (including src/py/mat3ra/notebooks/utils/other/plot.py, auth.py, and api.py) also still reference the old path and will need similar updates.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In
`@other/materials_designer/specific_examples/optimization_interface_film_xy_position_graphene_nickel.ipynb`
at line 274, Update the broken import path: replace imports that reference
mat3ra.utils.jupyterlite.plot with the new module path
mat3ra.notebooks.utils.other.jupyterlite.plot so calls to plot_2d_heatmap and
plot_3d_surface resolve; search for other occurrences (e.g., in files that
import plot.py, auth.py, api.py) and update their import statements to the new
path so the functions plot_2d_heatmap and plot_3d_surface are imported from
mat3ra.notebooks.utils.other.jupyterlite.plot.
🧹 Nitpick comments (7)
other/materials_designer/create_adatom_defect.ipynb (1)

234-234: 💤 Low value

Consolidate duplicate imports.

The visualize import on this line duplicates the one on line 144. While notebooks sometimes repeat imports for cell independence, consolidating them reduces redundancy.

♻️ Proposed refactor

Remove this duplicate import and rely on the import from the earlier cell (line 144).

-from mat3ra.notebooks.utils.other.visualize import visualize_materials as visualize
-
 visualize([
     {
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@other/materials_designer/create_adatom_defect.ipynb` at line 234, Remove the
duplicate import of visualize_materials (imported as visualize) from
mat3ra.notebooks.utils.other.visualize in this cell; rely on the earlier import
already present in the notebook (the one that defined visualize previously) to
avoid redundancy—delete the line containing "from
mat3ra.notebooks.utils.other.visualize import visualize_materials as visualize"
and keep the original import only.
other/jarvis/run_job_using_material_from_jarvis_db.ipynb (1)

442-442: 💤 Low value

Consolidate duplicate imports.

The imports on this line duplicate those already imported on lines 250-251. While notebooks sometimes repeat imports for cell independence, consolidating them at the top improves clarity.

♻️ Proposed refactor

Remove this duplicate import statement and rely on the imports from the earlier cell (lines 249-251).

-from mat3ra.notebooks.utils.other.api import wait_for_jobs_to_finish_async, get_property_by_subworkflow_and_unit_indicies
 job_id = job["_id"]
 await wait_for_jobs_to_finish_async(job_endpoints, [job_id])
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@other/jarvis/run_job_using_material_from_jarvis_db.ipynb` at line 442, Remove
the duplicate import statement that re-imports wait_for_jobs_to_finish_async and
get_property_by_subworkflow_and_unit_indicies and rely on the single
consolidated import already present earlier in the notebook; locate the
redundant import line that reads "from mat3ra.notebooks.utils.other.api import
wait_for_jobs_to_finish_async, get_property_by_subworkflow_and_unit_indicies",
delete it, and ensure any cells that depend on those functions still run (they
will use the original import).
other/materials_designer/create_perturbation.ipynb (1)

165-165: 💤 Low value

Redundant import: visualize already imported at line 115.

This import duplicates the one at line 115. Consider removing to keep imports at the top of their respective sections.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@other/materials_designer/create_perturbation.ipynb` at line 165, Remove the
redundant import of visualize from mat3ra.notebooks.utils.other.visualize (the
duplicate of the earlier import of visualize) so only the single import remains;
locate the duplicate import statement that imports "visualize_materials as
visualize" and delete it, keeping the original import at the top (the earlier
import of visualize) to avoid duplication.
other/materials_designer/create_nanoribbon.ipynb (1)

168-168: 💤 Low value

Redundant import: visualize already imported at line 117.

This import duplicates the one at line 117. Consider removing to keep imports at the top of their respective sections.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@other/materials_designer/create_nanoribbon.ipynb` at line 168, Remove the
redundant import of visualize by deleting the duplicate line "from
mat3ra.notebooks.utils.other.visualize import visualize as visualize" in this
cell and rely on the existing import earlier in the notebook (the original
visualize import), so only a single import of the visualize symbol remains at
the top of the imports section.
other/materials_designer/create_point_defect.ipynb (1)

214-214: 💤 Low value

Redundant import: visualize already imported at line 135.

This import duplicates the one at line 135. Consider removing to keep imports at the top of their respective sections.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@other/materials_designer/create_point_defect.ipynb` at line 214, Remove the
redundant import of visualize ("from mat3ra.notebooks.utils.other.visualize
import visualize") so only the single canonical import of visualize remains at
the top of the notebook; locate the duplicate import statement and delete it,
leaving the original import (the one already importing visualize) intact to
avoid duplicate definitions.
pyproject.toml (1)

97-97: ⚡ Quick win

Set Ruff target version to py310 to match the declared Python baseline.

requires-python is >=3.10 and Black targets py310, but Ruff is set to py38. This mismatch can produce inconsistent linting behavior for Python 3.10+ syntax features.

Update line 113 to target-version = "py310".

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@pyproject.toml` at line 97, The Ruff configuration's target-version key
currently mismatches the project Python baseline (it is set to py38), causing
inconsistent linting for py310 features; update the target-version setting (the
target-version config key in pyproject.toml) to "py310" (replace the existing
py38 value) so Ruff aligns with requires-python and Black.
src/py/mat3ra/notebooks/utils/other/jupyterlite/plot.py (1)

14-19: ⚖️ Poor tradeoff

Module-level side effect affects global matplotlib state.

Line 19 calls configure_matplotlib_renderer() at import time, which modifies the global matplotlib backend. This side effect occurs whenever the module is imported, potentially affecting other code that expects a different backend.

Consider documenting this behavior prominently or moving the configuration call to an explicit initialization function that users must call.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/py/mat3ra/notebooks/utils/other/jupyterlite/plot.py` around lines 14 -
19, The module currently invokes configure_matplotlib_renderer() at import time
which mutates global matplotlib state; remove the module-level call and instead
expose configure_matplotlib_renderer (or add an explicit init function like
init_matplotlib_renderer) so callers opt-in to changing the backend, and update
the module docstring to clearly document that callers must call
configure_matplotlib_renderer() (uses is_pyodide_environment() and
plt.switch_backend) when they want the pyodide-specific backend applied.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@config.yml`:
- Around line 8-13: The default packages_pyodide list is too minimal and many
notebooks (e.g., examples/job/run-simulations-and-extract-properties.ipynb,
examples/job/ml-train-model-predict-properties.ipynb,
examples/reproducing_publications/...) import pandas and will fail; update the
packages_pyodide block in config.yml to include the missing runtime deps (at
minimum "pandas" and any common dependencies like "numpy") or add per-notebook
config entries that specify the required packages for those notebooks so they
don't inherit the reduced default set.

In
`@examples/reproducing_publications/band_gaps_for_interface_bilayer_twisted_molybdenum_disulfide.ipynb`:
- Line 762: The notebook imports wait_for_jobs_to_finish but then awaits
wait_for_jobs_to_finish_async, causing a NameError; update the import to bring
in wait_for_jobs_to_finish_async from mat3ra.notebooks.utils.other.api (or
change the awaited call to wait_for_jobs_to_finish if you intended the sync
helper). Ensure the symbol names align between the import statement and the
awaited call (look for wait_for_jobs_to_finish and wait_for_jobs_to_finish_async
in the cell around the import and the await).

In
`@examples/reproducing_publications/band_structure_for_interface_bilayer_twisted_molybdenum_disulfide.ipynb`:
- Line 749: The import and call names mismatch: the notebook imports
wait_for_jobs_to_finish but later awaits wait_for_jobs_to_finish_async; update
the import to pull wait_for_jobs_to_finish_async from
mat3ra.notebooks.utils.other.api (or alternatively change the awaited call to
wait_for_jobs_to_finish) so the imported symbol matches the awaited function
name (adjust any usages to the chosen name accordingly).

In `@other/materials_designer/create_interface_with_relaxation_ase_emt.ipynb`:
- Line 16: The markdown link target "made" in the sentence containing "[Create
Interface with Min Strain ZSL](made)" is invalid; update the link target to the
actual notebook filename, e.g., replace the target "made" with
"create_interface_with_min_strain_zsl.ipynb" so the link becomes "[Create
Interface with Min Strain ZSL](create_interface_with_min_strain_zsl.ipynb)";
ensure the link text "[Create Interface with Min Strain ZSL]" remains unchanged
and verify the referenced notebook file exists.

In `@other/materialsproject/api_interoperability_showcase.py`:
- Around line 49-51: The call to get_ipython().system(...) should be guarded so
it won’t raise NameError or lint errors when IPython is unavailable; import
get_ipython from IPython and call it into a variable (e.g., ip = get_ipython()),
then only call ip.system(...) if ip is not None and the existing environment
check ("COLAB_JUPYTER_IP" in os.environ) passes; update the code around
get_ipython() to use from IPython import get_ipython and the ip is not None
guard instead of calling get_ipython() directly.
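A sketch of the suggested guard. `pip_install_in_colab` is a hypothetical wrapper name; the only assumption is that the script may run with or without an IPython kernel available.

```python
import os


def pip_install_in_colab(requirement: str) -> None:
    """Run a shell install only when an IPython kernel is actually present."""
    try:
        from IPython import get_ipython
    except ImportError:
        return  # IPython not installed: nothing to do, no NameError
    ip = get_ipython()
    # get_ipython() returns None outside a running kernel.
    if ip is not None and "COLAB_JUPYTER_IP" in os.environ:
        ip.system(f"pip install {requirement}")
```

Outside Colab (or outside any kernel) the function is a silent no-op rather than a crash.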

In `@src/py/mat3ra/notebooks/utils/core/json.py`:
- Around line 40-47: The current code injects json_str directly into an inline
<script>, enabling script-injection; instead, stop interpolating raw JSON into
JS by placing the JSON payload into a non-executing container (e.g., a <script
type="application/json" id="{id}-data"> or a data- attribute) using a safe JSON
serializer (e.g., Python's json.dumps on the object with ensure_ascii=True) and
then in the inline script read and parse that container via
document.getElementById("{id}-data").textContent and JSON.parse() before passing
the resulting object to renderjson; update the code that builds the HTML/JS
(referencing json_str, renderjson, document.getElementById and id) to implement
this pattern so no unescaped JSON is directly injected into executable script
context.
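The pattern could look roughly like this (a hedged sketch: `renderjson` is the JS library the original code already references, and `build_json_display_html` is an illustrative name):

```python
import json
import uuid


def build_json_display_html(data: object) -> str:
    """Embed JSON in a non-executing container, then parse it in JS,
    instead of interpolating raw JSON into an inline <script>."""
    element_id = f"json-view-{uuid.uuid4().hex}"
    # ensure_ascii keeps the payload 7-bit; escaping "<" and ">" prevents a
    # crafted "</script>" inside the data from closing the container early.
    payload = (
        json.dumps(data, ensure_ascii=True)
        .replace("<", "\\u003c")
        .replace(">", "\\u003e")
    )
    return f"""
<div id="{element_id}"></div>
<script type="application/json" id="{element_id}-data">{payload}</script>
<script>
  var el = document.getElementById("{element_id}-data");
  var obj = JSON.parse(el.textContent);
  document.getElementById("{element_id}").appendChild(renderjson(obj));
</script>
"""
```

The JSON never reaches an executable script context unescaped; it is read back out of the `application/json` container and parsed with `JSON.parse`.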

In `@src/py/mat3ra/notebooks/utils/other/jupyterlite/interrupts.py`:
- Around line 164-169: The loop can make remaining_seconds slightly negative
because it always subtracts check_interval_seconds; change the logic in the loop
that uses remaining_seconds, poll_interval_seconds and check_interval_seconds
(and which checks broadcast_channel_abort_controller and raises UserAbortError)
so you subtract the actual sleep duration instead of always subtracting
check_interval_seconds — e.g., compute slept = min(check_interval_seconds,
remaining_seconds), await asyncio.sleep(slept), then do remaining_seconds -=
slept (or clamp remaining_seconds = max(0, remaining_seconds -
check_interval_seconds)) to avoid negative remaining_seconds and the extra tiny
sleep.
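A minimal sketch of the clamped loop (`UserAbortError` follows the prompt; `is_aborted` stands in for the broadcast-channel abort check):

```python
import asyncio


class UserAbortError(Exception):
    pass


async def sleep_with_interrupt_checks(
    poll_interval_seconds: float,
    check_interval_seconds: float = 0.25,
    is_aborted=lambda: False,
) -> None:
    """Sleep poll_interval_seconds in total, waking periodically to check
    for an abort; the final sleep is clamped to the remaining budget."""
    remaining_seconds = poll_interval_seconds
    while remaining_seconds > 0:
        if is_aborted():
            raise UserAbortError("Polling aborted by user")
        slept = min(check_interval_seconds, remaining_seconds)
        await asyncio.sleep(slept)
        remaining_seconds -= slept  # never goes negative, no extra tiny sleep
```

Subtracting the actual `slept` duration (rather than always `check_interval_seconds`) keeps `remaining_seconds` non-negative, which is exactly the fix the prompt describes.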

In `@src/py/mat3ra/notebooks/utils/other/jupyterlite/io.py`:
- Around line 5-31: The function write_to_file is declared async but contains no
awaits, forcing callers to await unnecessarily; change its signature from async
def write_to_file(...) to a normal def write_to_file(...) (and adjust any
imports/usage sites to call it synchronously if they currently await it),
leaving the internal logic and return value unchanged; ensure references to
write_to_file across the codebase are updated to remove awaits where present.
- Around line 5-18: The docstring for write_to_file incorrectly claims
Pyodide-specific behavior that isn’t implemented; update the docstring to
reflect the actual implementation by removing the
`/drive/<folder-name>/<file-name>` statement and instead state that
write_to_file writes to the provided file_name path (in both Python and Pyodide
environments if applicable), accepts file_content types
(str/bytes/io.StringIO/io.BytesIO), uses the mode parameter, and returns the
saved file path; reference the write_to_file function name so the docstring
change is applied directly to that function.
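Taken together, the two io.py points might look like this sketch; the signature is inferred from the prompt's description, not taken from the actual module.

```python
import io
import os
from typing import Union


def write_to_file(
    file_content: Union[str, bytes, io.StringIO, io.BytesIO],
    file_name: str,
    mode: str = "w",
) -> str:
    """Write file_content to file_name and return the saved file path.

    Plain `def`: the body has nothing to await, so callers use it
    synchronously in both Python and Pyodide environments.
    """
    if isinstance(file_content, (io.StringIO, io.BytesIO)):
        file_content = file_content.getvalue()
    if isinstance(file_content, bytes) and "b" not in mode:
        mode += "b"  # binary payloads need a binary mode
    with open(file_name, mode) as f:
        f.write(file_content)
    return os.path.abspath(file_name)
```

The docstring now describes what the function actually does (write to the given path) rather than a Pyodide-only `/drive/...` layout it never implements.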

In `@src/py/mat3ra/notebooks/utils/other/jupyterlite/logger.py`:
- Around line 23-29: The code reads VERBOSE from caller_globals or os.environ
with os.environ.get("VERBOSE", True) which treats any non-empty string (e.g.
"False") as truthy; change the logic in the block around
inspect.currentframe()/caller_frame to explicitly parse the environment value
into a boolean: fetch env_val = os.environ.get("VERBOSE") and if env_val is None
fall back to True, otherwise normalize env_val = env_val.strip().lower() and set
should_log based on env_val in a whitelist of truthy strings (e.g.
{"1","true","yes","on"}) so that "False","0","no" and "" are treated as False;
update the code that assigns should_log (referencing caller_globals, VERBOSE,
and the should_log variable) accordingly.
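One way to implement the whitelist parsing (a sketch; `resolve_verbose` is an illustrative helper name):

```python
import os
from typing import Optional

_TRUTHY = {"1", "true", "yes", "on"}


def resolve_verbose(caller_globals: Optional[dict] = None) -> bool:
    """Resolve VERBOSE from caller globals first, then the environment.

    Unlike os.environ.get("VERBOSE", True), the strings "False", "0",
    "no" and "" are treated as falsy instead of truthy.
    """
    if caller_globals and "VERBOSE" in caller_globals:
        return bool(caller_globals["VERBOSE"])
    env_val = os.environ.get("VERBOSE")
    if env_val is None:
        return True  # default: log
    return env_val.strip().lower() in _TRUTHY
```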

In `@src/py/mat3ra/notebooks/utils/other/jupyterlite/plot.py`:
- Around line 194-201: The marker for optimal_point uses z_opt =
np.min(z_matrix) which is wrong; instead locate the z-value at (x_opt, y_opt) by
finding the indices in the meshgrid arrays x_matrix and y_matrix that match (or
are closest to) x_opt and y_opt (e.g. compute idx = argmin(|x_matrix - x_opt|)
and jdx = argmin(|y_matrix - y_opt|) or find where both equal) and then set
z_opt = z_matrix[idx, jdx] before calling fig.add_trace for the marker; update
the block that computes z_opt (referencing optimal_point, x_opt, y_opt,
x_matrix, y_matrix, z_matrix, and fig.add_trace) accordingly.
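The nearest-node lookup could be sketched as below; it assumes `numpy.meshgrid` with the default "xy" indexing (x varies along columns, y along rows), as the prompt implies.

```python
import numpy as np


def z_at_point(x_matrix, y_matrix, z_matrix, x_opt, y_opt):
    """Return the z value at the grid node nearest (x_opt, y_opt),
    instead of np.min(z_matrix), which marks the wrong height."""
    # With default meshgrid indexing, row 0 of x_matrix holds all x values
    # and column 0 of y_matrix holds all y values.
    jdx = int(np.argmin(np.abs(x_matrix[0, :] - x_opt)))
    idx = int(np.argmin(np.abs(y_matrix[:, 0] - y_opt)))
    return z_matrix[idx, jdx]
```

The returned value is what the marker trace should use as `z_opt` before `fig.add_trace`.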

In `@src/py/mat3ra/notebooks/utils/other/jupyterlite/url.py`:
- Around line 27-45: The function read_from_url_pyodide currently lacks error
handling and its docstring return type is incorrect; update the docstring
Returns section to state "Union[str, bytes]" and wrap the pyfetch call and
subsequent response reads in a try/except that checks response.status (or
response.ok) and raises a descriptive exception (or returns a controlled error)
on non-2xx responses and also catches network errors from pyfetch to log/raise a
useful error; ensure the code still returns await response.bytes() when as_bytes
is True and await response.string() otherwise, and preserve the existing pyodide
import and async signature.
- Around line 48-65: The read_from_url function has an incomplete docstring and
may return None if ENVIRONMENT is neither EnvironmentsEnum.PYODIDE nor
EnvironmentsEnum.PYTHON; update the docstring to complete the description of the
as_bytes parameter and the return value, and add explicit handling for
unexpected ENVIRONMENT values (e.g., raise a ValueError or TypeError with a
clear message) so read_from_url always returns Union[str, bytes] or raises;
reference the ENVIRONMENT variable and EnvironmentsEnum, and ensure you call
read_from_url_pyodide(url, as_bytes) or read_from_url_python(url, as_bytes)
accordingly.
- Around line 7-24: read_from_url_python currently lets network exceptions
(e.g., URLError, HTTPError, socket.timeout) propagate; wrap the
urllib.request.urlopen(...) call in a try/except that catches
urllib.error.URLError, urllib.error.HTTPError and socket.timeout, and handle
them by raising a clear RuntimeError (or a custom exception) that includes the
original exception message for debugging; ensure you import urllib.error and
socket and preserve the as_bytes behavior when returning successful responses in
read_from_url_python.
- Around line 20-24: The urllib.request.urlopen call in the function handling
url (variables url and as_bytes) must restrict accepted URL schemes to only
"http" and "https" and specify a timeout to avoid SSRF and hanging; before
calling urllib.request.urlopen, parse url with urllib.parse.urlparse, validate
that parsed.scheme is "http" or "https" (raise a ValueError or custom exception
otherwise), and then call urllib.request.urlopen(url,
timeout=YOUR_TIMEOUT_SECONDS) instead of the current call, ensuring any
exceptions from urlopen are propagated or converted to a clear error for the
caller.
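Combining the three url.py suggestions for the Python path, a hedged sketch might be (the timeout value is arbitrary, not from the source):

```python
import socket
import urllib.error
import urllib.parse
import urllib.request
from typing import Union

DEFAULT_TIMEOUT_SECONDS = 30  # assumed value, tune as needed


def read_from_url_python(url: str, as_bytes: bool = False) -> Union[str, bytes]:
    """Fetch a URL with scheme validation, a timeout, and wrapped errors."""
    parsed = urllib.parse.urlparse(url)
    if parsed.scheme not in ("http", "https"):
        raise ValueError(f"Unsupported URL scheme: {parsed.scheme!r}")
    try:
        with urllib.request.urlopen(url, timeout=DEFAULT_TIMEOUT_SECONDS) as response:
            content = response.read()
    except (urllib.error.URLError, socket.timeout) as exc:
        # HTTPError subclasses URLError, so non-2xx responses land here too.
        raise RuntimeError(f"Failed to read {url}: {exc}") from exc
    return content if as_bytes else content.decode("utf-8")
```

Restricting schemes to http/https closes the SSRF vector (e.g. `file://` reads), the timeout prevents indefinite hangs, and the wrapped exception preserves the original error message for debugging.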

🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 8743e8aa-d17e-4e36-894f-1297a9ded536

📥 Commits

Reviewing files that changed from the base of the PR and between 37836ca and 21cafa3.

📒 Files selected for processing (129)
  • README.ipynb
  • README.md
  • config.yml
  • examples/job/create_and_submit_job.ipynb
  • examples/job/get-file-from-job.ipynb
  • examples/job/ml-train-model-predict-properties.ipynb
  • examples/job/run-simulations-and-extract-properties.ipynb
  • examples/material/create_material.ipynb
  • examples/material/get_materials_by_formula.ipynb
  • examples/material/upload_materials_from_file_poscar.ipynb
  • examples/reproducing_publications/band_gaps_for_interface_bilayer_twisted_molybdenum_disulfide.ipynb
  • examples/reproducing_publications/band_structure_for_interface_bilayer_twisted_molybdenum_disulfide.ipynb
  • examples/system/get_authentication_params.ipynb
  • examples/workflow/get_workflows.ipynb
  • examples/workflow/qe_scf_calculation.ipynb
  • other/experiments/jupyterlite/relax_structure_with_mace.ipynb
  • other/experiments/mtrsm/calculate_energy_for_vacancy_mattersim.ipynb
  • other/jarvis/run_job_using_material_from_jarvis_db.ipynb
  • other/materials_designer/create_adatom_defect.ipynb
  • other/materials_designer/create_cluster_custom_shape.ipynb
  • other/materials_designer/create_cluster_specific_shape.ipynb
  • other/materials_designer/create_cluster_sphere.ipynb
  • other/materials_designer/create_cutout_box.ipynb
  • other/materials_designer/create_cutout_custom_shape.ipynb
  • other/materials_designer/create_grain_boundary_crystal.ipynb
  • other/materials_designer/create_grain_boundary_film.ipynb
  • other/materials_designer/create_heterostructure_example.ipynb
  • other/materials_designer/create_interface_with_min_strain_zsl.ipynb
  • other/materials_designer/create_interface_with_no_strain_matching.ipynb
  • other/materials_designer/create_interface_with_relaxation_ase_emt.ipynb
  • other/materials_designer/create_island_defect.ipynb
  • other/materials_designer/create_island_defect_custom.ipynb
  • other/materials_designer/create_maxwell_disorder.ipynb
  • other/materials_designer/create_monolayer.ipynb
  • other/materials_designer/create_nanoribbon.ipynb
  • other/materials_designer/create_nanowire.ipynb
  • other/materials_designer/create_nanowire_custom_shape.ipynb
  • other/materials_designer/create_perturbation.ipynb
  • other/materials_designer/create_perturbation_custom.ipynb
  • other/materials_designer/create_point_defect.ipynb
  • other/materials_designer/create_point_defect_pair.ipynb
  • other/materials_designer/create_slab.ipynb
  • other/materials_designer/create_supercell.ipynb
  • other/materials_designer/create_terrace_defect.ipynb
  • other/materials_designer/create_twisted_interface_with_commensurate_lattices.ipynb
  • other/materials_designer/create_twisted_interface_with_nanoribbons.ipynb
  • other/materials_designer/custom_transformation.ipynb
  • other/materials_designer/import_material_from_jarvis_db_entry.ipynb
  • other/materials_designer/import_materials_from_files.ipynb
  • other/materials_designer/import_materials_from_standata.ipynb
  • other/materials_designer/optimize_film_position.ipynb
  • other/materials_designer/passivate_edge.ipynb
  • other/materials_designer/passivate_slab.ipynb
  • other/materials_designer/specific_examples/custom_cutout_etched_silicon.ipynb
  • other/materials_designer/specific_examples/defect_planar_grain_boundary_2d_boron_nitride.ipynb
  • other/materials_designer/specific_examples/defect_planar_grain_boundary_3d_fcc_metals_copper.ipynb
  • other/materials_designer/specific_examples/defect_point_adatom_island_molybdenum_disulfide_platinum.ipynb
  • other/materials_designer/specific_examples/defect_point_interstitial_tin_oxide.ipynb
  • other/materials_designer/specific_examples/defect_point_pair_gallium_nitride.ipynb
  • other/materials_designer/specific_examples/defect_point_substitution_graphene.ipynb
  • other/materials_designer/specific_examples/defect_point_substitution_graphene_simulation.ipynb
  • other/materials_designer/specific_examples/defect_point_vacancy_boron_nitride.ipynb
  • other/materials_designer/specific_examples/defect_surface_adatom_graphene.ipynb
  • other/materials_designer/specific_examples/defect_surface_island_titanium_nitride.ipynb
  • other/materials_designer/specific_examples/defect_surface_step_platinum.ipynb
  • other/materials_designer/specific_examples/heterostructure_silicon_silicon_dioxide_hafnium_dioxide_titanium_nitride.ipynb
  • other/materials_designer/specific_examples/interface_2d_2d_boron_nitride_graphene.ipynb
  • other/materials_designer/specific_examples/interface_2d_3d_graphene_silicon_dioxide.ipynb
  • other/materials_designer/specific_examples/interface_3d_3d_copper_cristobalite.ipynb
  • other/materials_designer/specific_examples/interface_bilayer_twisted_commensurate_lattices_molybdenum_disulfide.ipynb
  • other/materials_designer/specific_examples/interface_bilayer_twisted_nanoribbons_boron_nitride.ipynb
  • other/materials_designer/specific_examples/nanocluster_gold.ipynb
  • other/materials_designer/specific_examples/optimization_interface_film_xy_position_graphene_nickel.ipynb
  • other/materials_designer/specific_examples/passivation_edge_nanowire_silicon.ipynb
  • other/materials_designer/specific_examples/passivation_surface_silicon.ipynb
  • other/materials_designer/specific_examples/perturbation_ripple_graphene.ipynb
  • other/materials_designer/specific_examples/slab_strontium_titanate.ipynb
  • other/materials_designer/under_the_hood.ipynb
  • other/materials_designer/workflows/band_gap.ipynb
  • other/materials_designer/workflows/band_structure.ipynb
  • other/materials_designer/workflows/band_structure_hse.ipynb
  • other/materials_designer/workflows/convergence.ipynb
  • other/materials_designer/workflows/equation_of_state.ipynb
  • other/materials_designer/workflows/relaxation.ipynb
  • other/materials_designer/workflows/surface_energy.ipynb
  • other/materials_designer/workflows/total_energy.ipynb
  • other/materials_designer/workflows/total_energy_convergence.ipynb
  • other/materials_designer/workflows/total_energy_post_processing.ipynb
  • other/materials_designer/workflows/valence_band_offset.ipynb
  • other/materialsproject/api_interoperability_showcase.ipynb
  • other/materialsproject/api_interoperability_showcase.py
  • other/materialsproject/import_materials_from_materialsproject.ipynb
  • other/materialsproject/import_materials_from_materialsproject.py
  • other/webinar/generate-al2o3-slab-structures.py
  • other/webinar/wulff-construction-surface-energy-study-cu.py
  • packages/ase-3.25.0-py3-none-any.whl
  • packages/pymatgen_analysis_defects-2024.4.23-py3-none-any.whl
  • pyproject.toml
  • src/py/mat3ra/__init__.py
  • src/py/mat3ra/notebooks/__init__.py
  • src/py/mat3ra/notebooks/utils/__init__.py
  • src/py/mat3ra/notebooks/utils/core/__init__.py
  • src/py/mat3ra/notebooks/utils/core/json.py
  • src/py/mat3ra/notebooks/utils/core/settings.json
  • src/py/mat3ra/notebooks/utils/core/settings.py
  • src/py/mat3ra/notebooks/utils/core/web/renderjson.css
  • src/py/mat3ra/notebooks/utils/core/web/renderjson.js
  • src/py/mat3ra/notebooks/utils/other/__init__.py
  • src/py/mat3ra/notebooks/utils/other/api.py
  • src/py/mat3ra/notebooks/utils/other/auth.py
  • src/py/mat3ra/notebooks/utils/other/equation_of_state.py
  • src/py/mat3ra/notebooks/utils/other/generic.py
  • src/py/mat3ra/notebooks/utils/other/io.py
  • src/py/mat3ra/notebooks/utils/other/job_properties.py
  • src/py/mat3ra/notebooks/utils/other/jupyterlite/__init__.py
  • src/py/mat3ra/notebooks/utils/other/jupyterlite/enums.py
  • src/py/mat3ra/notebooks/utils/other/jupyterlite/environment.py
  • src/py/mat3ra/notebooks/utils/other/jupyterlite/interrupts.py
  • src/py/mat3ra/notebooks/utils/other/jupyterlite/io.py
  • src/py/mat3ra/notebooks/utils/other/jupyterlite/logger.py
  • src/py/mat3ra/notebooks/utils/other/jupyterlite/packages.py
  • src/py/mat3ra/notebooks/utils/other/jupyterlite/plot.py
  • src/py/mat3ra/notebooks/utils/other/jupyterlite/settings.py
  • src/py/mat3ra/notebooks/utils/other/jupyterlite/url.py
  • src/py/mat3ra/notebooks/utils/other/material.py
  • src/py/mat3ra/notebooks/utils/other/notebook.py
  • src/py/mat3ra/notebooks/utils/other/plot.py
  • src/py/mat3ra/notebooks/utils/other/torch_pyodide.py
  • src/py/mat3ra/notebooks/utils/other/visualize.py

Comment thread config.yml Outdated
Comment on lines +8 to +13
packages_pyodide:
- annotated_types>=0.6.0
- ipywidgets
- typing-extensions==4.14.1
- emfs:/drive/packages/pydantic_core-2.18.2-py3-none-any.whl
- emfs:/drive/packages/pydantic-2.7.1-py3-none-any.whl

⚠️ Potential issue | 🔴 Critical | 🏗️ Heavy lift

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Description: Check which notebooks rely on packages removed from defaults

# Search for common packages that were removed from defaults
rg -t ipynb -n "import (pandas|scipy|sympy|numpy)" examples/ other/ --iglob '*.ipynb' -A 2 -B 2

Repository: Exabyte-io/api-examples

Length of output: 97


🏁 Script executed:

#!/bin/bash
# First, find and read the config.yml file
find . -name "config.yml" -type f

Repository: Exabyte-io/api-examples

Length of output: 80


🏁 Script executed:

#!/bin/bash
# Find all notebook files in the repository
find . -name "*.ipynb" -type f | head -20

Repository: Exabyte-io/api-examples

Length of output: 1238


🏁 Script executed:

#!/bin/bash
# Search for import statements in notebooks using grep on JSON content
# .ipynb files are JSON with code cells containing imports
find . -name "*.ipynb" -type f -exec grep -l "\"import pandas\|\"import scipy\|\"import sympy\|\"import numpy" {} \;

Repository: Exabyte-io/api-examples

Length of output: 1179


🏁 Script executed:

#!/bin/bash
# Read the config.yml to see the full structure
cat -n ./config.yml

Repository: Exabyte-io/api-examples

Length of output: 3613


🏁 Script executed:

#!/bin/bash
# For notebooks that import pandas/scipy/numpy, check if they have notebook-specific config
# Let's look at one example: run-simulations-and-extract-properties.ipynb
python3 << 'EOF'
import json
import os

notebook_path = "./examples/job/run-simulations-and-extract-properties.ipynb"

if os.path.exists(notebook_path):
    with open(notebook_path, 'r') as f:
        nb = json.load(f)
    
    # Extract imports from code cells
    imports = set()
    for cell in nb.get('cells', []):
        if cell.get('cell_type') == 'code':
            source = ''.join(cell.get('source', []))
            # Look for common imports
            for pkg in ['pandas', 'numpy', 'scipy', 'matplotlib', 'json', 'requests']:
                if f'import {pkg}' in source or f'from {pkg}' in source:
                    imports.add(pkg)
    
    print(f"Notebook: {notebook_path}")
    print(f"Imports found: {imports if imports else 'None of the common packages'}")
else:
    print("Notebook not found")
EOF

Repository: Exabyte-io/api-examples

Length of output: 171


🏁 Script executed:

#!/bin/bash
# Check which of the notebooks that import pandas/scipy/numpy have configs in config.yml
# List all notebook-specific configurations first
echo "=== Notebook-specific configurations in config.yml ==="
rg "name:" config.yml

echo ""
echo "=== Checking if notebooks with pandas/scipy imports have configs ==="

# Extract list of configured notebooks
configured_notebooks=$(rg "name: .*\.ipynb" config.yml -o | sed 's/name: //')
echo "Configured notebooks:"
echo "$configured_notebooks"

echo ""
echo "=== Checking a few notebooks that import pandas ==="
python3 << 'EOF'
import json
import os

# Check a few notebooks
notebooks_to_check = [
    "./examples/job/run-simulations-and-extract-properties.ipynb",
    "./examples/job/ml-train-model-predict-properties.ipynb",
    "./examples/reproducing_publications/band_gaps_for_interface_bilayer_twisted_molybdenum_disulfide.ipynb"
]

for nb_path in notebooks_to_check:
    if os.path.exists(nb_path):
        with open(nb_path, 'r') as f:
            nb = json.load(f)
        
        imports = set()
        for cell in nb.get('cells', []):
            if cell.get('cell_type') == 'code':
                source = ''.join(cell.get('source', []))
                for pkg in ['pandas', 'numpy', 'scipy', 'matplotlib']:
                    if f'import {pkg}' in source or f'from {pkg}' in source:
                        imports.add(pkg)
        
        print(f"\n{nb_path}")
        print(f"  Imports: {imports if imports else 'None of analyzed packages'}")
    else:
        print(f"\n{nb_path}: NOT FOUND")
EOF

Repository: Exabyte-io/api-examples

Length of output: 853


🏁 Script executed:

#!/bin/bash
# Let's check if there are other notebooks that might be in directory-based configs
echo "=== Looking at the 'made' notebook configuration ==="
rg -A 20 "name: made" config.yml

echo ""
echo "=== Checking how notebooks map to config names ==="
echo "The 'made' entry in config.yml likely refers to notebooks in a 'made' directory or named 'made.ipynb'"
find . -path "*made*" -name "*.ipynb" | head -5

echo ""
echo "=== Check if 'specific_examples' is a directory-based config ==="
find . -path "*specific_examples*" -name "*.ipynb" | head -5

echo ""
echo "=== Check if 'api_examples' is a directory-based config ==="
find . -path "*api_examples*" -name "*.ipynb" | head -5

Repository: Exabyte-io/api-examples

Length of output: 1838


🏁 Script executed:

#!/bin/bash
# Let's count how many notebooks are not covered by specific configurations
echo "=== Total notebooks ==="
find . -name "*.ipynb" -type f | wc -l

echo ""
echo "=== Notebooks in directories/paths that might match configured names ==="
echo "Checking for 'made' notebooks:"
find . -path "*made*" -name "*.ipynb" | wc -l

echo "Checking for 'api_examples' notebooks:"
find . -path "*api_examples*" -name "*.ipynb" | wc -l

echo "Checking for 'api' notebooks:"
find . -path "*api*" -name "*.ipynb" | grep -v api_examples | wc -l

echo "Checking for 'torch' notebooks:"
find . -path "*torch*" -name "*.ipynb" | wc -l

echo "Checking for 'mace' notebooks:"
find . -path "*mace*" -name "*.ipynb" | wc -l

echo ""
echo "=== Summary: Notebooks without explicit configuration ==="
python3 << 'EOF'
import json
import os
import re

# Find all notebooks
all_notebooks = []
for root, dirs, files in os.walk('.'):
    for f in files:
        if f.endswith('.ipynb'):
            all_notebooks.append(os.path.join(root, f))

# List configured patterns (these are directory-based or filename-based)
configured_patterns = ['made', 'import_material_from_jarvis_db_entry.ipynb', 
                      'import_materials_from_files.ipynb', 'specific_examples',
                      'api', 'api_examples', 'torch', 'mace']

unconfigured = []
for nb in all_notebooks:
    # Check if notebook matches any configured pattern
    is_configured = any(pattern in nb for pattern in configured_patterns)
    if not is_configured:
        unconfigured.append(nb)

print(f"Total notebooks: {len(all_notebooks)}")
print(f"\nUnconfigured notebooks ({len(unconfigured)}):")
for nb in sorted(unconfigured)[:10]:  # Show first 10
    print(f"  {nb}")
if len(unconfigured) > 10:
    print(f"  ... and {len(unconfigured) - 10} more")
EOF

Repository: Exabyte-io/api-examples

Length of output: 1066


🏁 Script executed:

#!/bin/bash
# Let's check if pandas is in the minimal default set or if it was removed
echo "=== Checking what packages are in the minimal default ==="
rg -A 6 "default:" config.yml

echo ""
echo "=== Verifying a notebook that needs pandas works with only default packages ==="
# Let's check the first few lines of an unconfigured notebook that uses pandas
python3 << 'EOF'
import json

nb_path = "./examples/job/run-simulations-and-extract-properties.ipynb"
with open(nb_path, 'r') as f:
    nb = json.load(f)

# Find the first cell that uses pandas
for i, cell in enumerate(nb.get('cells', [])):
    if cell.get('cell_type') == 'code':
        source = ''.join(cell.get('source', []))
        if 'import pandas' in source or 'from pandas' in source:
            print(f"Cell {i}: First pandas usage")
            print(source[:500])
            break
EOF

echo ""
echo "=== Check git history to see if pandas was in defaults before ==="
git log --oneline -n 20 -- config.yml

Repository: Exabyte-io/api-examples

Length of output: 1073


🏁 Script executed:

#!/bin/bash
# Let's check if there's documentation about what packages should be available
echo "=== Check README or docs for package information ==="
if [ -f README.md ]; then
    rg -i "pandas|scipy|numpy|package" README.md | head -20
fi

echo ""
echo "=== Check if there's a requirements.txt or environment.yml ==="
find . -maxdepth 2 -name "requirements*.txt" -o -name "environment*.yml"

Repository: Exabyte-io/api-examples

Length of output: 245


Unconfigured notebooks will fail with the reduced default package set.

77 out of 107 notebooks in the repository lack explicit configuration in config.yml and will inherit only the minimal default packages (annotated_types, ipywidgets, typing-extensions, pydantic_core, pydantic). At least 16 of these unconfigured notebooks import pandas, which is not in the defaults. Examples:

  • ./examples/job/run-simulations-and-extract-properties.ipynb
  • ./examples/job/ml-train-model-predict-properties.ipynb
  • ./examples/reproducing_publications/band_gaps_for_interface_bilayer_twisted_molybdenum_disulfide.ipynb

These notebooks will fail at runtime when executed in the pyodide environment.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@config.yml` around lines 8 - 13, The default packages_pyodide list is too
minimal and many notebooks (e.g.,
examples/job/run-simulations-and-extract-properties.ipynb,
examples/job/ml-train-model-predict-properties.ipynb,
examples/reproducing_publications/...) import pandas and will fail; update the
packages_pyodide block in config.yml to include the missing runtime deps (at
minimum "pandas" and any common dependencies like "numpy") or add per-notebook
config entries that specify the required packages for those notebooks so they
don't inherit the reduced default set.
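One way to unblock those notebooks is a per-notebook entry. The sketch below assumes the `name:` / `packages_pyodide:` keys seen in this config.yml; the notebook name and package list are illustrative only:

```yaml
# Hypothetical per-notebook override; key names follow the defaults shown above.
- name: run-simulations-and-extract-properties.ipynb
  packages_pyodide:
    - pandas
    - annotated_types>=0.6.0
    - ipywidgets
```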

"<h2 style=\"color:green\">Usage</h2>\n",
"\n",
"1. To get interface material one can run [Create Interface with Min Strain ZSL](create_interface_with_min_strain_zsl.ipynb) notebook first. Upon completion, the resulting interface from it will be accessible in the current notebook as explained in [Data exchange](Introduction.ipynb#data-exchange).\n",
"1. To get interface material one can run [Create Interface with Min Strain ZSL](made) notebook first. Upon completion, the resulting interface from it will be accessible in the current notebook as explained in [Data exchange](Introduction.ipynb#data-exchange).\n",

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Broken markdown link target.

The link [Create Interface with Min Strain ZSL](made) has an invalid target made. It should point to the actual notebook file, likely create_interface_with_min_strain_zsl.ipynb.

🔗 Proposed fix
-1. To get interface material one can run [Create Interface with Min Strain ZSL](made) notebook first. Upon completion, the resulting interface from it will be accessible in the current notebook as explained in [Data exchange](Introduction.ipynb#data-exchange).
+1. To get interface material one can run [Create Interface with Min Strain ZSL](create_interface_with_min_strain_zsl.ipynb) notebook first. Upon completion, the resulting interface from it will be accessible in the current notebook as explained in [Data exchange](Introduction.ipynb#data-exchange).
🤖 Prompt for AI Agents

In `@other/materials_designer/create_interface_with_relaxation_ase_emt.ipynb` at
line 16, The markdown link target "made" in the sentence containing "[Create
Interface with Min Strain ZSL](made)" is invalid; update the link target to the
actual notebook filename, e.g., replace the target "made" with
"create_interface_with_min_strain_zsl.ipynb" so the link becomes "[Create
Interface with Min Strain ZSL](create_interface_with_min_strain_zsl.ipynb)";
ensure the link text "[Create Interface with Min Strain ZSL]" remains unchanged
and verify the referenced notebook file exists.

Comment on lines +49 to +51
get_ipython().system(
    'GIT_BRANCH="dev"; export GIT_BRANCH; curl -s "https://raw.githubusercontent.com/Exabyte-io/api-examples/${GIT_BRANCH}/scripts/env.sh" | bash'
)

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

🧩 Analysis chain

🏁 Script executed:

cat -n other/materialsproject/api_interoperability_showcase.py | head -70

Repository: Exabyte-io/api-examples

Length of output: 4149


🏁 Script executed:

rg "get_ipython" other/materialsproject/api_interoperability_showcase.py

Repository: Exabyte-io/api-examples

Length of output: 93


🏁 Script executed:

head -60 other/materialsproject/api_interoperability_showcase.py | tail -40

Repository: Exabyte-io/api-examples

Length of output: 1736


Guard IPython-only shell execution for script compatibility.

Line 49 calls get_ipython().system(...) without importing get_ipython. The if "COLAB_JUPYTER_IP" in os.environ: guard does not guarantee get_ipython is defined — it exists only in interactive IPython/Jupyter contexts — so the script triggers F821 (undefined name) under linting and raises NameError at runtime if the environment variable is set outside a notebook.

Proposed fix
+import subprocess
@@
-    get_ipython().system(
-        'GIT_BRANCH="dev"; export GIT_BRANCH; curl -s "https://raw.githubusercontent.com/Exabyte-io/api-examples/${GIT_BRANCH}/scripts/env.sh" | bash'
-    )
+    subprocess.run(
+        'GIT_BRANCH="dev"; export GIT_BRANCH; curl -s "https://raw.githubusercontent.com/Exabyte-io/api-examples/${GIT_BRANCH}/scripts/env.sh" | bash',
+        shell=True,
+        check=True,
+    )
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-get_ipython().system(
-    'GIT_BRANCH="dev"; export GIT_BRANCH; curl -s "https://raw.githubusercontent.com/Exabyte-io/api-examples/${GIT_BRANCH}/scripts/env.sh" | bash'
-)
+subprocess.run(
+    'GIT_BRANCH="dev"; export GIT_BRANCH; curl -s "https://raw.githubusercontent.com/Exabyte-io/api-examples/${GIT_BRANCH}/scripts/env.sh" | bash',
+    shell=True,
+    check=True,
+)
🧰 Tools
🪛 Ruff (0.15.12)

[error] 49-49: Undefined name get_ipython

(F821)

🤖 Prompt for AI Agents

In `@other/materialsproject/api_interoperability_showcase.py` around lines 49 -
51, The call to get_ipython().system(...) should be guarded so it won’t raise
NameError or lint errors when IPython is unavailable; import get_ipython from
IPython and call it into a variable (e.g., ip = get_ipython()), then only call
ip.system(...) if ip is not None and the existing environment check
("COLAB_JUPYTER_IP" in os.environ) passes; update the code around get_ipython()
to use from IPython import get_ipython and the ip is not None guard instead of
calling get_ipython() directly.
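A minimal standalone sketch of the guarded pattern; the helper name `run_in_shell` is hypothetical, and only the `COLAB_JUPYTER_IP` check comes from the original script:

```python
import os
import subprocess

def run_in_shell(command: str) -> None:
    # Resolve get_ipython explicitly so the name is always defined,
    # even when the script runs outside IPython (fixes the F821 finding).
    try:
        from IPython import get_ipython
        ip = get_ipython()  # returns None outside an interactive session
    except ImportError:
        ip = None
    if ip is not None:
        ip.system(command)
    else:
        # Plain-Python fallback with equivalent shell semantics.
        subprocess.run(command, shell=True, check=True)

# Mirrors the original guard: only run the setup line inside Colab.
if "COLAB_JUPYTER_IP" in os.environ:
    run_in_shell('GIT_BRANCH="dev"; export GIT_BRANCH; echo "$GIT_BRANCH"')
```

With `check=True`, a failing command still surfaces as an exception in the script context, matching the interactive behavior closely enough for setup steps.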

Comment on lines +194 to +201
if optimal_point is not None:
    x_opt, y_opt = optimal_point
    z_opt = np.min(z_matrix)
    fig.add_trace(
        go.Scatter3d(
            x=[x_opt], y=[y_opt], z=[z_opt], mode="markers", marker=dict(size=8, color="red"), name="Optimal Point"
        )
    )

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Incorrect z-coordinate for optimal point marker.

Line 196 sets z_opt = np.min(z_matrix), which finds the global minimum across the entire surface. However, the optimal point marker should be placed at the z-value corresponding to the (x_opt, y_opt) coordinates, not the global minimum.

🐛 Proposed fix

To correctly extract the z-value at (x_opt, y_opt), you need to find the indices in x_matrix and y_matrix that correspond to these coordinates:

     if optimal_point is not None:
         x_opt, y_opt = optimal_point
-        z_opt = np.min(z_matrix)
+        # Find indices closest to optimal point
+        x_idx = np.argmin(np.abs(x_matrix[0, :] - x_opt))
+        y_idx = np.argmin(np.abs(y_matrix[:, 0] - y_opt))
+        z_opt = z_matrix[y_idx, x_idx]
         fig.add_trace(

Note: This assumes x_matrix and y_matrix are meshgrid outputs. Adjust indexing if they have a different structure.

🤖 Prompt for AI Agents

In `@src/py/mat3ra/notebooks/utils/other/jupyterlite/plot.py` around lines 194 -
201, The marker for optimal_point uses z_opt = np.min(z_matrix) which is wrong;
instead locate the z-value at (x_opt, y_opt) by finding the indices in the
meshgrid arrays x_matrix and y_matrix that match (or are closest to) x_opt and
y_opt (e.g. compute idx = argmin(|x_matrix[0, :] - x_opt|) and jdx = argmin(|y_matrix[:, 0] - y_opt|)) and then set z_opt = z_matrix[jdx, idx] (row index from y first)
before calling fig.add_trace for the marker; update the block that computes
z_opt (referencing optimal_point, x_opt, y_opt, x_matrix, y_matrix, z_matrix,
and fig.add_trace) accordingly.

Comment on lines +7 to +24
def read_from_url_python(url: str, as_bytes: bool = False) -> Union[str, bytes]:
    """
    Fetch, read, and decode content from a URL in a Python environment.

    Args:
        url (str): The URL to fetch from.
        as_bytes (bool): Whether to return the content as bytes.

    Returns:
        str: The content as a string or bytes.
    """
    import urllib.request

    with urllib.request.urlopen(url) as response:
        body = response.read()
        if as_bytes:
            return body
        return body.decode("utf-8")

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Add error handling for network failures.

The function does not handle network errors (e.g., URLError, HTTPError, timeouts), which will cause unhandled exceptions to propagate to callers.

🛡️ Proposed fix to add error handling
 def read_from_url_python(url: str, as_bytes: bool = False) -> Union[str, bytes]:
     """
     Fetch, read, and decode content from a URL in a Python environment.

     Args:
         url (str): The URL to fetch from.
         as_bytes (bool): Whether to return the content as bytes.

     Returns:
         Union[str, bytes]: The content as a string or bytes.
+
+    Raises:
+        ValueError: If the URL scheme is not supported.
+        urllib.error.URLError: If the URL cannot be fetched.
     """
     import urllib.request
+    from urllib.error import URLError
     from urllib.parse import urlparse

     parsed = urlparse(url)
     if parsed.scheme not in ("http", "https"):
         raise ValueError(f"Unsupported URL scheme: {parsed.scheme}. Only http and https are allowed.")

-    with urllib.request.urlopen(url, timeout=30) as response:
-        body = response.read()
-        if as_bytes:
-            return body
-        return body.decode("utf-8")
+    try:
+        with urllib.request.urlopen(url, timeout=30) as response:
+            body = response.read()
+            if as_bytes:
+                return body
+            return body.decode("utf-8")
+    except URLError as e:
+        raise URLError(f"Failed to fetch URL {url}: {e}") from e
🧰 Tools
🪛 Ruff (0.15.12)

[error] 20-20: Audit URL open for permitted schemes. Allowing use of file: or custom schemes is often unexpected.

(S310)

🤖 Prompt for AI Agents

In `@src/py/mat3ra/notebooks/utils/other/jupyterlite/url.py` around lines 7 - 24,
read_from_url_python currently lets network exceptions (e.g., URLError,
HTTPError, socket.timeout) propagate; wrap the urllib.request.urlopen(...) call
in a try/except that catches urllib.error.URLError, urllib.error.HTTPError and
socket.timeout, and handle them by raising a clear RuntimeError (or a custom
exception) that includes the original exception message for debugging; ensure
you import urllib.error and socket and preserve the as_bytes behavior when
returning successful responses in read_from_url_python.
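Both findings for this function (scheme restriction plus timeout, and error wrapping) can be folded into one hardened variant. This is a sketch: the name `read_from_url_safe` and the 30-second default are choices for illustration, not the module's API:

```python
import socket
import urllib.error
import urllib.request
from typing import Union
from urllib.parse import urlparse

def read_from_url_safe(url: str, as_bytes: bool = False, timeout: float = 30.0) -> Union[str, bytes]:
    # Reject non-HTTP(S) schemes up front (addresses the S310 audit).
    scheme = urlparse(url).scheme
    if scheme not in ("http", "https"):
        raise ValueError(f"Unsupported URL scheme: {scheme!r}; only http/https are allowed")
    try:
        # Bounded wait so a dead endpoint cannot hang the notebook.
        with urllib.request.urlopen(url, timeout=timeout) as response:
            body = response.read()
    except (urllib.error.URLError, socket.timeout) as e:
        # URLError covers HTTPError as well as DNS/connection failures.
        raise RuntimeError(f"Failed to fetch {url}: {e}") from e
    return body if as_bytes else body.decode("utf-8")
```

Callers then see either a `ValueError` (bad scheme) or a `RuntimeError` carrying the original cause, instead of raw urllib exceptions.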

Comment on lines +20 to +24
    with urllib.request.urlopen(url) as response:
        body = response.read()
        if as_bytes:
            return body
        return body.decode("utf-8")

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Restrict URL schemes and add timeout to prevent SSRF and hanging.

The urlopen call accepts any URL scheme (including file://, ftp://, etc.), which poses a security risk. Additionally, without a timeout, the call can block indefinitely on slow or unresponsive endpoints.

🔒 Proposed fix to restrict schemes and add timeout
 def read_from_url_python(url: str, as_bytes: bool = False) -> Union[str, bytes]:
     """
     Fetch, read, and decode content from a URL in a Python environment.

     Args:
         url (str): The URL to fetch from.
         as_bytes (bool): Whether to return the content as bytes.

     Returns:
-        str: The content as a string or bytes.
+        Union[str, bytes]: The content as a string or bytes.
     """
     import urllib.request
+    from urllib.parse import urlparse
+
+    parsed = urlparse(url)
+    if parsed.scheme not in ("http", "https"):
+        raise ValueError(f"Unsupported URL scheme: {parsed.scheme}. Only http and https are allowed.")

-    with urllib.request.urlopen(url) as response:
+    with urllib.request.urlopen(url, timeout=30) as response:
         body = response.read()
         if as_bytes:
             return body
         return body.decode("utf-8")
🧰 Tools
🪛 Ruff (0.15.12)

[error] 20-20: Audit URL open for permitted schemes. Allowing use of file: or custom schemes is often unexpected.

(S310)

🤖 Prompt for AI Agents

In `@src/py/mat3ra/notebooks/utils/other/jupyterlite/url.py` around lines 20 - 24,
The urllib.request.urlopen call in the function handling url (variables url and
as_bytes) must restrict accepted URL schemes to only "http" and "https" and
specify a timeout to avoid SSRF and hanging; before calling
urllib.request.urlopen, parse url with urllib.parse.urlparse, validate that
parsed.scheme is "http" or "https" (raise a ValueError or custom exception
otherwise), and then call urllib.request.urlopen(url,
timeout=YOUR_TIMEOUT_SECONDS) instead of the current call, ensuring any
exceptions from urlopen are propagated or converted to a clear error for the
caller.

Comment on lines +27 to +45
async def read_from_url_pyodide(url: str, as_bytes: bool = False) -> Union[str, bytes]:
    """
    Fetch and read content from a URL in a Pyodide environment.

    Args:
        url (str): The URL to fetch from.
        as_bytes (bool): Whether to return the content as bytes.

    Returns:
        str: The content as a string or bytes.
    """
    # `http` is a Pyodide module that will be installed in the Pyodide environment by default.
    from pyodide.http import pyfetch  # type: ignore

    # Per https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch
    response = await pyfetch(url)
    if as_bytes:
        return await response.bytes()
    return await response.string()

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Add error handling and fix docstring return type.

The async Pyodide implementation lacks error handling for fetch failures and has an inconsistent return type annotation in the docstring.

🛡️ Proposed fix
 async def read_from_url_pyodide(url: str, as_bytes: bool = False) -> Union[str, bytes]:
     """
     Fetch and read content from a URL in a Pyodide environment.

     Args:
         url (str): The URL to fetch from.
         as_bytes (bool): Whether to return the content as bytes.

     Returns:
-        str: The content as a string or bytes.
+        Union[str, bytes]: The content as a string or bytes.
+
+    Raises:
+        Exception: If the fetch operation fails.
     """
     # `http` is a Pyodide module that will be installed in the Pyodide environment by default.
     from pyodide.http import pyfetch  # type: ignore

-    # Per https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch
-    response = await pyfetch(url)
-    if as_bytes:
-        return await response.bytes()
-    return await response.string()
+    try:
+        # Per https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch
+        response = await pyfetch(url)
+        if not response.ok:
+            raise Exception(f"HTTP {response.status}: {response.status_text}")
+        if as_bytes:
+            return await response.bytes()
+        return await response.string()
+    except Exception as e:
+        raise Exception(f"Failed to fetch URL {url}: {e}") from e
🤖 Prompt for AI Agents

In `@src/py/mat3ra/notebooks/utils/other/jupyterlite/url.py` around lines 27 - 45,
The function read_from_url_pyodide currently lacks error handling and its
docstring return type is incorrect; update the docstring Returns section to
state "Union[str, bytes]" and wrap the pyfetch call and subsequent response
reads in a try/except that checks response.status (or response.ok) and raises a
descriptive exception (or returns a controlled error) on non-2xx responses and
also catches network errors from pyfetch to log/raise a useful error; ensure the
code still returns await response.bytes() when as_bytes is True and await
response.string() otherwise, and preserve the existing pyodide import and async
signature.

Comment on lines +48 to +65
async def read_from_url(url: str, as_bytes: bool = False) -> Union[str, bytes]:
    """
    Read content from a URL, handling both Python and Pyodide environments.

    Args:
        url (str): The URL to fetch from.
        as_bytes (bool): Whether to return the content as

    Returns:
        str: The content as a string or bytes.
    """
    response = None
    if ENVIRONMENT == EnvironmentsEnum.PYODIDE:
        response = await read_from_url_pyodide(url, as_bytes)
    elif ENVIRONMENT == EnvironmentsEnum.PYTHON:
        response = read_from_url_python(url, as_bytes)

    return response

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Fix incomplete docstring and handle unexpected ENVIRONMENT values.

The docstring at line 54 is incomplete, and the function can return None if ENVIRONMENT doesn't match either enum value, which violates the return type annotation.

📝 Proposed fix
 async def read_from_url(url: str, as_bytes: bool = False) -> Union[str, bytes]:
     """
     Read content from a URL, handling both Python and Pyodide environments.

     Args:
         url (str): The URL to fetch from.
-        as_bytes (bool): Whether to return the content as
+        as_bytes (bool): Whether to return the content as bytes.

     Returns:
-        str: The content as a string or bytes.
+        Union[str, bytes]: The content as a string or bytes.
+
+    Raises:
+        ValueError: If the environment is not recognized.
     """
-    response = None
     if ENVIRONMENT == EnvironmentsEnum.PYODIDE:
-        response = await read_from_url_pyodide(url, as_bytes)
+        return await read_from_url_pyodide(url, as_bytes)
     elif ENVIRONMENT == EnvironmentsEnum.PYTHON:
-        response = read_from_url_python(url, as_bytes)
-
-    return response
+        return read_from_url_python(url, as_bytes)
+    else:
+        raise ValueError(f"Unsupported environment: {ENVIRONMENT}")
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
async def read_from_url(url: str, as_bytes: bool = False) -> Union[str, bytes]:
    """
    Read content from a URL, handling both Python and Pyodide environments.

    Args:
        url (str): The URL to fetch from.
        as_bytes (bool): Whether to return the content as

    Returns:
        str: The content as a string or bytes.
    """
    response = None
    if ENVIRONMENT == EnvironmentsEnum.PYODIDE:
        response = await read_from_url_pyodide(url, as_bytes)
    elif ENVIRONMENT == EnvironmentsEnum.PYTHON:
        response = read_from_url_python(url, as_bytes)

    return response
async def read_from_url(url: str, as_bytes: bool = False) -> Union[str, bytes]:
    """
    Read content from a URL, handling both Python and Pyodide environments.

    Args:
        url (str): The URL to fetch from.
        as_bytes (bool): Whether to return the content as bytes.

    Returns:
        Union[str, bytes]: The content as a string or bytes.

    Raises:
        ValueError: If the environment is not recognized.
    """
    if ENVIRONMENT == EnvironmentsEnum.PYODIDE:
        return await read_from_url_pyodide(url, as_bytes)
    elif ENVIRONMENT == EnvironmentsEnum.PYTHON:
        return read_from_url_python(url, as_bytes)
    else:
        raise ValueError(f"Unsupported environment: {ENVIRONMENT}")
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/py/mat3ra/notebooks/utils/other/jupyterlite/url.py` around lines 48 - 65,
The read_from_url function has an incomplete docstring and may return None if
ENVIRONMENT is neither EnvironmentsEnum.PYODIDE nor EnvironmentsEnum.PYTHON;
update the docstring to complete the description of the as_bytes parameter and
the return value, and add explicit handling for unexpected ENVIRONMENT values
(e.g., raise a ValueError or TypeError with a clear message) so read_from_url
always returns Union[str, bytes] or raises; reference the ENVIRONMENT variable
and EnvironmentsEnum, and ensure you call read_from_url_pyodide(url, as_bytes)
or read_from_url_python(url, as_bytes) accordingly.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@config.yml`:
- Line 22: The dependency entry contains a malformed requirement string
'ase>=3.25.0",' — remove the stray quote and trailing comma so the requirement
becomes a valid YAML list item (e.g., ase>=3.25.0) or wrap the entire
requirement correctly in quotes if the list format requires it; update the entry
for the ASE package (the string "ase>=3.25.0"," in config.yml) to be a valid
YAML scalar.
- Line 9: The config.yml URL port (http://localhost:8090/...) doesn't match the
default in wheel_server.py (port 8080); fix by either updating the URL in
config.yml to use port 8080 to match the server default, or modify
wheel_server.py to change the default port to 8090 (or add CLI
flag/documentation to require running wheel_server.py with --port 8090); update
the single instance of the URL in config.yml if choosing the first option, or
change the default_port / argparse default in wheel_server.py (and any tests) if
choosing the second.
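
The malformed-requirement finding above can be checked mechanically. As a sketch (assuming the third-party `packaging` library is available; the helper name `is_valid_requirement` is made up for illustration), the string `ase>=3.25.0",` fails to parse as a PEP 508 requirement, while the cleaned form succeeds:

```python
from packaging.requirements import InvalidRequirement, Requirement


def is_valid_requirement(spec: str) -> bool:
    """Return True if spec parses as a PEP 508 requirement string."""
    try:
        Requirement(spec)
        return True
    except InvalidRequirement:
        return False


# Cleaned entry parses; the entry with the stray quote and trailing comma does not.
print(is_valid_requirement("ase>=3.25.0"))    # → True
print(is_valid_requirement('ase>=3.25.0",'))  # → False
```

Running a check like this over every dependency entry in config.yml would catch this class of typo before it reaches Pyodide.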

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 31677f87-0881-4f9b-a33d-28c63a8350c3

📥 Commits

Reviewing files that changed from the base of the PR and between 21cafa3 and 273522d.

📒 Files selected for processing (1)
  • config.yml

Comment thread config.yml Outdated
Comment thread config.yml Outdated

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 6

🧹 Nitpick comments (5)
other/materials_designer/create_point_defect_pair.ipynb (1)

128-128: ⚡ Quick win

Remove duplicate import.

The same import appears in both cell 8 (line 128) and cell 12 (line 185). Since the notebook instructions recommend running all cells sequentially, the second import is redundant. Consider removing the duplicate from line 185.

♻️ Proposed fix to remove duplicate import

Remove the import from line 185 since it's already imported in line 128:

-from mat3ra.notebooks.utils.other.visualize import visualize_materials as visualize
-
 visualize([{"material": supercell, "title": "Original material"},
            {"material": material_with_defect, "title": f"Material with defect"}],

Also applies to: 185-185

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@other/materials_designer/create_point_defect_pair.ipynb` at line 128, Remove
the redundant duplicate import of "from mat3ra.notebooks.utils.other.visualize
import visualize_materials as visualize" by deleting the second occurrence (the
one in the later cell) so the notebook only imports visualize once; keep the
first import (the one that defines visualize) and remove the later duplicate to
avoid redundancy when running cells sequentially.
other/materials_designer/specific_examples/defect_surface_adatom_graphene.ipynb (1)

185-185: 💤 Low value

Remove the redundant import.

The visualize_materials function is already imported at line 135. This duplicate import is unnecessary.

♻️ Proposed fix to remove duplicate import
-from mat3ra.notebooks.utils.other.visualize import visualize_materials as visualize
-
 for material_with_defect in materials_with_adatom:
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In
`@other/materials_designer/specific_examples/defect_surface_adatom_graphene.ipynb`
at line 185, There is a redundant duplicate import of visualize_materials
(aliased as visualize) — remove the second import line "from
mat3ra.notebooks.utils.other.visualize import visualize_materials as visualize"
so only the original import at line where visualize_materials is first imported
remains; ensure any uses reference the existing imported symbol (visualize or
visualize_materials) consistently.
other/materials_designer/specific_examples/defect_point_pair_gallium_nitride.ipynb (1)

168-168: 💤 Low value

Remove the redundant import.

The visualize_materials function is already imported at line 96. This duplicate import is unnecessary.

♻️ Proposed fix to remove duplicate import
-from mat3ra.notebooks.utils.other.visualize import visualize_materials as visualize
-
 visualize([{\"material\": supercell, \"title\": \"Original material\"},
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In
`@other/materials_designer/specific_examples/defect_point_pair_gallium_nitride.ipynb`
at line 168, Remove the redundant import of visualize_materials by deleting the
duplicate line that imports "from mat3ra.notebooks.utils.other.visualize import
visualize_materials as visualize"; keep the original import of
visualize_materials (already present elsewhere) and ensure only the single
import of visualize_materials/visualize remains to avoid duplication.
other/materials_designer/create_heterostructure_example.ipynb (1)

307-307: ⚡ Quick win

Consider consolidating imports from mat3ra.notebooks.utils.other.jupyterlite.

Lines 149 and 307 both import from the same module. Combine these into a single import statement for better code organization:

from mat3ra.notebooks.utils.other.jupyterlite import get_materials, set_materials
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@other/materials_designer/create_heterostructure_example.ipynb` at line 307,
Two separate import lines pull from the same module; consolidate them into one
import that brings in both functions to improve organization. Replace the
separate imports of get_materials and set_materials from
mat3ra.notebooks.utils.other.jupyterlite with a single statement importing both
(e.g., from mat3ra.notebooks.utils.other.jupyterlite import get_materials, set_materials) and remove the
duplicate line, ensuring any cells that use get_materials or set_materials still
reference the same names.
pyproject.toml (1)

97-97: ⚡ Quick win

Align Ruff target-version with the Python 3.10 baseline.

Line 97 sets Black to py310, but Ruff is still py38 (Line 113). Keep them consistent to avoid conflicting lint/format expectations.

Minimal fix
 [tool.ruff]
-target-version = "py38"
+target-version = "py310"
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@pyproject.toml` at line 97, The Ruff configuration's target-version is
inconsistent with Black: update Ruff's target-version (the value under
[tool.ruff], currently 'py38') to 'py310' to match Black's target-version =
['py310']; locate the [tool.ruff] block and change the target-version setting to
the same Python 3.10 identifier so both formatters/linters share the same
baseline.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@other/materials_designer/create_heterostructure_example.ipynb`:
- Around line 124-127: Replace the nonexistent package name in the
micropip.install call: change the string passed to micropip.install from
'mat3ra-notebooks-utils' to 'mat3ra-api-examples' so the import from
mat3ra.notebooks.utils.other.jupyterlite.packages and the await
install_packages("made|specific_examples") call work with the correct PyPI
package; specifically update the micropip.install invocation referenced in the
diff that precedes the import of
mat3ra.notebooks.utils.other.jupyterlite.packages.

In `@other/materials_designer/create_nanowire.ipynb`:
- Around line 64-66: The notebook tries to install a non-PyPI package via
micropip.install('mat3ra-notebooks-utils') which will fail in Pyodide; either
remove or guard that install and document the local-only requirement, or switch
to a custom wheel server — specifically, remove or wrap the micropip.install
call and instead rely on the existing import from
mat3ra.notebooks.utils.other.jupyterlite.packages and the await
install_packages("made") call, or add a conditional that detects
Pyodide/JupyterLite and points to a configured wheel server or a clear error
message instructing users to pip install the repo locally (e.g., via pip install
-e .) before running the notebook.

In `@other/materials_designer/create_perturbation_custom.ipynb`:
- Line 225: The call to set_materials from
mat3ra.notebooks.utils.other.jupyterlite currently passes a single material
object; update the call to pass a list by wrapping the material in square
brackets so it matches the signature set_materials(materials: List[Any]) — e.g.
replace the existing call with set_materials([material_with_perturbation])
referencing the set_materials function and the material_with_perturbation
variable.

In `@other/materials_designer/create_perturbation.ipynb`:
- Around line 72-75: Several notebooks still import the old namespace
mat3ra.utils.jupyterlite.*; open each listed notebook
(examples/workflow/qe_scf_calculation.ipynb,
examples/job/get-file-from-job.ipynb,
other/materials_designer/create_interface_with_relaxation_ase_emt.ipynb,
other/materials_designer/optimize_film_position.ipynb,
other/materials_designer/specific_examples/optimization_interface_film_xy_position_graphene_nickel.ipynb)
and replace imports and references of mat3ra.utils.jupyterlite (e.g., any "from
mat3ra.utils.jupyterlite.* import install_packages" or similar) with the new
namespace mat3ra.notebooks.utils.other.jupyterlite (or the more general
mat3ra.notebooks.utils.other.* as used in this diff) and update usages of
install_packages/micropip.install calls accordingly so they import
install_packages from mat3ra.notebooks.utils.other.jupyterlite.packages (or
mat3ra.notebooks.utils.other) and call await install_packages("made") as in the
updated file.

In `@other/materials_designer/create_supercell.ipynb`:
- Around line 67-69: Remove the redundant manual installation call
micropip.install('mat3ra-notebooks-utils') from the notebook: delete the line
invoking micropip.install so only the import from
mat3ra.notebooks.utils.other.jupyterlite.packages remains and keep the
subsequent await install_packages("made") call intact; this relies on
mat3ra-notebooks-utils being provided by the Pyodide config and avoids the
failing micropip install.

In `@pyproject.toml`:
- Around line 27-35: The extras entries referencing the wrong distribution name
must point to this project's package name; replace occurrences of
"mat3ra-notebooks[materials]" and "mat3ra-notebooks" in the extras lists with
"mat3ra-notebooks-utils[materials]" / "mat3ra-notebooks-utils" (and similarly
for the "workflows", "colab", and "localhost" extras), i.e., update the strings
shown in the diff that currently read "mat3ra-notebooks[materials]" (lines near
the materials/workflows lists) and the standalone "mat3ra-notebooks" entries so
they use "mat3ra-notebooks-utils" to ensure self-references resolve to the
correct distribution.

---

Nitpick comments:
In `@other/materials_designer/create_heterostructure_example.ipynb`:
- Line 307: Two separate import lines pull from the same module; consolidate
them into one import that brings in both functions to improve organization.
Replace the separate imports of get_materials and set_materials from
mat3ra.notebooks.utils.other.jupyterlite with a single statement importing both
(e.g., from mat3ra.notebooks.utils.other.jupyterlite import get_materials, set_materials) and remove the
duplicate line, ensuring any cells that use get_materials or set_materials still
reference the same names.

In `@other/materials_designer/create_point_defect_pair.ipynb`:
- Line 128: Remove the redundant duplicate import of "from
mat3ra.notebooks.utils.other.visualize import visualize_materials as visualize"
by deleting the second occurrence (the one in the later cell) so the notebook
only imports visualize once; keep the first import (the one that defines
visualize) and remove the later duplicate to avoid redundancy when running cells
sequentially.

In
`@other/materials_designer/specific_examples/defect_point_pair_gallium_nitride.ipynb`:
- Line 168: Remove the redundant import of visualize_materials by deleting the
duplicate line that imports "from mat3ra.notebooks.utils.other.visualize import
visualize_materials as visualize"; keep the original import of
visualize_materials (already present elsewhere) and ensure only the single
import of visualize_materials/visualize remains to avoid duplication.

In
`@other/materials_designer/specific_examples/defect_surface_adatom_graphene.ipynb`:
- Line 185: There is a redundant duplicate import of visualize_materials
(aliased as visualize) — remove the second import line "from
mat3ra.notebooks.utils.other.visualize import visualize_materials as visualize"
so only the original import at line where visualize_materials is first imported
remains; ensure any uses reference the existing imported symbol (visualize or
visualize_materials) consistently.

In `@pyproject.toml`:
- Line 97: The Ruff configuration's target-version is inconsistent with Black:
update Ruff's target-version (the value under [tool.ruff], currently 'py38') to
'py310' to match Black's target-version = ['py310']; locate the [tool.ruff]
block and change the target-version setting to the same Python 3.10 identifier
so both formatters/linters share the same baseline.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: a1a3b4a1-e9c8-4a77-8725-d853514a1082

📥 Commits

Reviewing files that changed from the base of the PR and between 273522d and 768cca9.

📒 Files selected for processing (74)
  • config.yml
  • examples/job/create_and_submit_job.ipynb
  • examples/job/get-file-from-job.ipynb
  • examples/job/ml-train-model-predict-properties.ipynb
  • examples/job/run-simulations-and-extract-properties.ipynb
  • examples/material/create_material.ipynb
  • examples/material/get_materials_by_formula.ipynb
  • examples/material/upload_materials_from_file_poscar.ipynb
  • examples/reproducing_publications/band_gaps_for_interface_bilayer_twisted_molybdenum_disulfide.ipynb
  • examples/reproducing_publications/band_structure_for_interface_bilayer_twisted_molybdenum_disulfide.ipynb
  • examples/system/get_authentication_params.ipynb
  • examples/workflow/get_workflows.ipynb
  • examples/workflow/qe_scf_calculation.ipynb
  • other/materials_designer/create_adatom_defect.ipynb
  • other/materials_designer/create_cluster_custom_shape.ipynb
  • other/materials_designer/create_cluster_specific_shape.ipynb
  • other/materials_designer/create_cluster_sphere.ipynb
  • other/materials_designer/create_cutout_box.ipynb
  • other/materials_designer/create_cutout_custom_shape.ipynb
  • other/materials_designer/create_grain_boundary_crystal.ipynb
  • other/materials_designer/create_grain_boundary_film.ipynb
  • other/materials_designer/create_heterostructure_example.ipynb
  • other/materials_designer/create_interface_with_min_strain_zsl.ipynb
  • other/materials_designer/create_interface_with_no_strain_matching.ipynb
  • other/materials_designer/create_interface_with_relaxation_ase_emt.ipynb
  • other/materials_designer/create_island_defect.ipynb
  • other/materials_designer/create_island_defect_custom.ipynb
  • other/materials_designer/create_maxwell_disorder.ipynb
  • other/materials_designer/create_monolayer.ipynb
  • other/materials_designer/create_nanoribbon.ipynb
  • other/materials_designer/create_nanowire.ipynb
  • other/materials_designer/create_nanowire_custom_shape.ipynb
  • other/materials_designer/create_perturbation.ipynb
  • other/materials_designer/create_perturbation_custom.ipynb
  • other/materials_designer/create_point_defect.ipynb
  • other/materials_designer/create_point_defect_pair.ipynb
  • other/materials_designer/create_slab.ipynb
  • other/materials_designer/create_supercell.ipynb
  • other/materials_designer/create_terrace_defect.ipynb
  • other/materials_designer/create_twisted_interface_with_commensurate_lattices.ipynb
  • other/materials_designer/create_twisted_interface_with_nanoribbons.ipynb
  • other/materials_designer/custom_transformation.ipynb
  • other/materials_designer/import_material_from_jarvis_db_entry.ipynb
  • other/materials_designer/import_materials_from_files.ipynb
  • other/materials_designer/import_materials_from_standata.ipynb
  • other/materials_designer/optimize_film_position.ipynb
  • other/materials_designer/passivate_edge.ipynb
  • other/materials_designer/passivate_slab.ipynb
  • other/materials_designer/specific_examples/custom_cutout_etched_silicon.ipynb
  • other/materials_designer/specific_examples/defect_planar_grain_boundary_2d_boron_nitride.ipynb
  • other/materials_designer/specific_examples/defect_planar_grain_boundary_3d_fcc_metals_copper.ipynb
  • other/materials_designer/specific_examples/defect_point_adatom_island_molybdenum_disulfide_platinum.ipynb
  • other/materials_designer/specific_examples/defect_point_interstitial_tin_oxide.ipynb
  • other/materials_designer/specific_examples/defect_point_pair_gallium_nitride.ipynb
  • other/materials_designer/specific_examples/defect_point_substitution_graphene.ipynb
  • other/materials_designer/specific_examples/defect_point_substitution_graphene_simulation.ipynb
  • other/materials_designer/specific_examples/defect_point_vacancy_boron_nitride.ipynb
  • other/materials_designer/specific_examples/defect_surface_adatom_graphene.ipynb
  • other/materials_designer/specific_examples/defect_surface_island_titanium_nitride.ipynb
  • other/materials_designer/specific_examples/defect_surface_step_platinum.ipynb
  • other/materials_designer/specific_examples/heterostructure_silicon_silicon_dioxide_hafnium_dioxide_titanium_nitride.ipynb
  • other/materials_designer/specific_examples/interface_2d_2d_boron_nitride_graphene.ipynb
  • other/materials_designer/specific_examples/interface_2d_3d_graphene_silicon_dioxide.ipynb
  • other/materials_designer/specific_examples/interface_3d_3d_copper_cristobalite.ipynb
  • other/materials_designer/specific_examples/interface_bilayer_twisted_commensurate_lattices_molybdenum_disulfide.ipynb
  • other/materials_designer/specific_examples/interface_bilayer_twisted_nanoribbons_boron_nitride.ipynb
  • other/materials_designer/specific_examples/nanocluster_gold.ipynb
  • other/materials_designer/specific_examples/optimization_interface_film_xy_position_graphene_nickel.ipynb
  • other/materials_designer/specific_examples/passivation_edge_nanowire_silicon.ipynb
  • other/materials_designer/specific_examples/passivation_surface_silicon.ipynb
  • other/materials_designer/specific_examples/perturbation_ripple_graphene.ipynb
  • other/materials_designer/specific_examples/slab_strontium_titanate.ipynb
  • other/materials_designer/under_the_hood.ipynb
  • pyproject.toml
✅ Files skipped from review due to trivial changes (6)
  • examples/material/upload_materials_from_file_poscar.ipynb
  • other/materials_designer/specific_examples/interface_2d_2d_boron_nitride_graphene.ipynb
  • other/materials_designer/specific_examples/interface_bilayer_twisted_nanoribbons_boron_nitride.ipynb
  • other/materials_designer/import_materials_from_standata.ipynb
  • other/materials_designer/passivate_edge.ipynb
  • other/materials_designer/under_the_hood.ipynb
🚧 Files skipped from review as they are similar to previous changes (47)
  • other/materials_designer/create_adatom_defect.ipynb
  • examples/workflow/get_workflows.ipynb
  • examples/material/create_material.ipynb
  • other/materials_designer/create_cutout_custom_shape.ipynb
  • other/materials_designer/specific_examples/defect_surface_step_platinum.ipynb
  • other/materials_designer/create_cluster_custom_shape.ipynb
  • other/materials_designer/create_maxwell_disorder.ipynb
  • other/materials_designer/create_slab.ipynb
  • other/materials_designer/specific_examples/heterostructure_silicon_silicon_dioxide_hafnium_dioxide_titanium_nitride.ipynb
  • other/materials_designer/specific_examples/defect_planar_grain_boundary_3d_fcc_metals_copper.ipynb
  • other/materials_designer/specific_examples/passivation_surface_silicon.ipynb
  • examples/job/run-simulations-and-extract-properties.ipynb
  • other/materials_designer/passivate_slab.ipynb
  • other/materials_designer/create_monolayer.ipynb
  • examples/reproducing_publications/band_gaps_for_interface_bilayer_twisted_molybdenum_disulfide.ipynb
  • examples/job/create_and_submit_job.ipynb
  • other/materials_designer/specific_examples/defect_point_interstitial_tin_oxide.ipynb
  • other/materials_designer/import_materials_from_files.ipynb
  • examples/reproducing_publications/band_structure_for_interface_bilayer_twisted_molybdenum_disulfide.ipynb
  • other/materials_designer/create_nanoribbon.ipynb
  • other/materials_designer/specific_examples/defect_point_vacancy_boron_nitride.ipynb
  • other/materials_designer/specific_examples/defect_surface_island_titanium_nitride.ipynb
  • other/materials_designer/optimize_film_position.ipynb
  • examples/material/get_materials_by_formula.ipynb
  • other/materials_designer/create_grain_boundary_film.ipynb
  • examples/job/ml-train-model-predict-properties.ipynb
  • other/materials_designer/import_material_from_jarvis_db_entry.ipynb
  • other/materials_designer/specific_examples/interface_3d_3d_copper_cristobalite.ipynb
  • other/materials_designer/create_point_defect.ipynb
  • examples/job/get-file-from-job.ipynb
  • other/materials_designer/create_terrace_defect.ipynb
  • examples/system/get_authentication_params.ipynb
  • examples/workflow/qe_scf_calculation.ipynb
  • other/materials_designer/specific_examples/interface_2d_3d_graphene_silicon_dioxide.ipynb
  • other/materials_designer/specific_examples/optimization_interface_film_xy_position_graphene_nickel.ipynb
  • other/materials_designer/create_twisted_interface_with_nanoribbons.ipynb
  • other/materials_designer/specific_examples/defect_point_substitution_graphene_simulation.ipynb
  • other/materials_designer/create_island_defect_custom.ipynb
  • other/materials_designer/specific_examples/interface_bilayer_twisted_commensurate_lattices_molybdenum_disulfide.ipynb
  • other/materials_designer/specific_examples/defect_point_substitution_graphene.ipynb
  • other/materials_designer/specific_examples/passivation_edge_nanowire_silicon.ipynb
  • other/materials_designer/custom_transformation.ipynb
  • other/materials_designer/create_interface_with_min_strain_zsl.ipynb
  • config.yml
  • other/materials_designer/create_nanowire_custom_shape.ipynb
  • other/materials_designer/specific_examples/defect_point_adatom_island_molybdenum_disulfide_platinum.ipynb
  • other/materials_designer/create_interface_with_relaxation_ase_emt.ipynb

Comment on lines +124 to +127
" await micropip.install('mat3ra-notebooks-utils')\n",
" from mat3ra.notebooks.utils.other.jupyterlite.packages import install_packages\n",
"\n",
" await install_packages(\"specific_examples\")\n"
" await install_packages(\"made|specific_examples\")\n"

⚠️ Potential issue | 🔴 Critical | ⚡ Quick win

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Description: Verify the mat3ra-notebooks-utils package and import paths

# Check if package exists in the repository or if there's documentation
fd -i -t f 'pyproject.toml|setup.py|setup.cfg' --exec grep -l 'mat3ra-notebooks-utils'

# Check for the new import paths in the codebase
rg -n --type=py 'mat3ra\.notebooks\.utils\.other\.jupyterlite\.packages'

# Check for install_packages function definition and its parameters
ast-grep --pattern 'def install_packages($$$)'

# Check for other usages of install_packages to understand the selector format
rg -n --type=py -A2 -B2 'install_packages.*made'

Repository: Exabyte-io/api-examples

Length of output: 2043


🌐 Web query:

mat3ra-notebooks-utils package documentation install_packages

💡 Result:

mat3ra-notebooks-utils appears to refer to utility modules within the mat3ra-api-examples package, specifically the utils.jupyterlite module, used in Mat3ra.com's JupyterLite notebooks for dependency management in Pyodide kernels [1][2]. It is not a standalone PyPI package; related packages are mat3ra-api-examples [3] and mat3ra-utils [4]. Official documentation is in Mat3ra platform docs [1][2]. No direct PyPI entry for "mat3ra-notebooks-utils" found across searches.


Fix the package name from mat3ra-notebooks-utils to mat3ra-api-examples.

The package mat3ra-notebooks-utils does not exist on PyPI. The correct package to install is mat3ra-api-examples, which contains the mat3ra.notebooks.utils module. Line 124 should be:

await micropip.install('mat3ra-api-examples')

The import path on line 125 and the install_packages() call on line 127 are correct.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@other/materials_designer/create_heterostructure_example.ipynb` around lines
124 - 127, Replace the nonexistent package name in the micropip.install call:
change the string passed to micropip.install from 'mat3ra-notebooks-utils' to
'mat3ra-api-examples' so the import from
mat3ra.notebooks.utils.other.jupyterlite.packages and the await
install_packages("made|specific_examples") call work with the correct PyPI
package; specifically update the micropip.install invocation referenced in the
diff that precedes the import of
mat3ra.notebooks.utils.other.jupyterlite.packages.

Comment thread other/materials_designer/create_nanowire.ipynb Outdated
Comment thread other/materials_designer/create_perturbation_custom.ipynb Outdated
Comment thread other/materials_designer/create_perturbation.ipynb Outdated
Comment thread other/materials_designer/create_supercell.ipynb Outdated
Comment thread pyproject.toml Outdated
Comment thread pyproject.toml Outdated
utils = [
"settings.json",
"web/renderjson.*",
"mat3ra.notebooks.utils" = [
Member


notebooks_utils


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 5

🧹 Nitpick comments (1)
examples/reproducing_publications/band_structure_for_interface_bilayer_twisted_molybdenum_disulfide.ipynb (1)

209-209: 💤 Low value

Remove duplicate import.

The same import statement appears earlier at line 113.

♻️ Proposed fix
-from mat3ra.notebooks.utils.other.visualize import visualize_materials
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In
`@examples/reproducing_publications/band_structure_for_interface_bilayer_twisted_molybdenum_disulfide.ipynb`
at line 209, Remove the duplicate import of visualize_materials by deleting the
later occurrence "from mat3ra.notebooks.utils.other.visualize import
visualize_materials" (the one at line 209) so only the original import remains;
search for the symbol visualize_materials to confirm the remaining import (the
earlier one at line 113) covers all uses and remove the redundant statement.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@examples/job/ml-train-model-predict-properties.ipynb`:
- Around line 137-141: The notebook imports wait_for_jobs_to_finish_async from
the wrong module; update the import to pull wait_for_jobs_to_finish_async from
mat3ra.notebooks.utils.other.job instead of mat3ra.notebooks.utils.other.api so
the symbol resolves at runtime; keep the other imports
(copy_bank_workflow_by_system_name,
get_property_by_subworkflow_and_unit_indicies) as-is and only change the module
path for wait_for_jobs_to_finish_async in the import statement.

In `@examples/job/run-simulations-and-extract-properties.ipynb`:
- Around line 147-150: The import currently pulls wait_for_jobs_to_finish_async
from mat3ra.notebooks.utils.other.api but that function lives in
mat3ra.notebooks.utils.other.job; update the import statements so
wait_for_jobs_to_finish_async is imported from mat3ra.notebooks.utils.other.job
and get_property_by_subworkflow_and_unit_indicies remains imported from
mat3ra.notebooks.utils.other.api (i.e., split the combined import into two
imports referencing the correct modules and keeping the same symbol names).

In `@examples/material/get_materials_by_formula.ipynb`:
- Around line 93-95: The notebook currently calls micropip.install with a
commit-specific dev wheel URL; replace that URL in the micropip.install(...)
call with a stable, versioned package source (preferably a PyPI package name and
version like "mat3ra-notebooks-utils==x.y.z" or a tagged release wheel URL) so
startup isn't tied to a commit hash or GitHub Pages path—update the argument to
micropip.install to use the released package identifier or a canonical artifact
URL instead of the commit-specific dev wheel.

In `@src/py/mat3ra/notebooks/utils/other/job.py`:
- Line 26: The polling functions (e.g., wait_for_jobs_to_finish_async and its
sync counterpart wait_for_jobs_to_finish) currently treat an empty job_ids list
as "no statuses" and can loop forever; add an early short-circuit at the start
of each function to immediately return True when job_ids is empty, and likewise
ensure any internal polling predicate that checks statuses returns True when the
input job_ids list is empty so the poll stops immediately; locate functions by
name (wait_for_jobs_to_finish_async, wait_for_jobs_to_finish and any internal
predicate that uses "statuses" from endpoint.get_job_statuses) and implement the
empty-list guard there.
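The guard described above can be sketched as follows. This is a minimal illustration, not the repository's code: the terminal status names and the predicate shape are assumptions, since the actual `interruptible_polling_loop` contract is not shown here.

```python
from typing import Dict, List


def all_jobs_finished(statuses: Dict[str, str], job_ids: List[str]) -> bool:
    # An empty job list means there is nothing left to wait for, so the
    # poll should stop immediately instead of treating "no statuses"
    # as "still running".
    if not job_ids:
        return True
    # Hypothetical terminal states; the real set lives in the job endpoint.
    terminal = {"finished", "error", "terminated"}
    return all(statuses.get(job_id) in terminal for job_id in job_ids)
```

With the guard in place, `all_jobs_finished({}, [])` is immediately true, so a polling loop built on this predicate exits on its first iteration for empty input.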

In `@src/py/mat3ra/notebooks/utils/other/jupyterlite/__init__.py`:
- Around line 321-323: The code double-encodes Material JSON: in the block
handling instances of Material or MaterialWithBuildMetadata you call content =
content.to_json() (which returns a JSON string) and then json.dumps(content,
indent=4), producing escaped JSON; instead parse the JSON string back to a
Python object (e.g., via json.loads(content) or call to_dict if available) and
then call json.dumps on that object with indent=4 so the output is a properly
formatted JSON file; update the branch that touches
Material/MaterialWithBuildMetadata (the content.to_json() and subsequent
json.dumps call) to use json.loads(content) before re-encoding.
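The double-encoding failure mode above is easy to reproduce in isolation. The class below is a hypothetical stand-in for `Material` whose `to_json()` returns a JSON *string*, which is all the bug needs:

```python
import json


class FakeMaterial:
    # Hypothetical stand-in for Material: to_json() returns a JSON string.
    def to_json(self) -> str:
        return json.dumps({"name": "Si", "basis": {"elements": ["Si", "Si"]}})


material = FakeMaterial()

# Buggy pattern: json.dumps of a string produces an escaped, quoted string.
double_encoded = json.dumps(material.to_json(), indent=4)

# Fixed pattern: parse back to a Python object before re-encoding with indent.
pretty = json.dumps(json.loads(material.to_json()), indent=4)
```

The buggy output starts with a quote character and contains backslash-escaped JSON; the fixed output is a properly indented object.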

---

Nitpick comments:
In
`@examples/reproducing_publications/band_structure_for_interface_bilayer_twisted_molybdenum_disulfide.ipynb`:
- Line 209: Remove the duplicate import of visualize_materials by deleting the
later occurrence "from mat3ra.notebooks.utils.other.visualize import
visualize_materials" (the one at line 209) so only the original import remains;
search for the symbol visualize_materials to confirm the remaining import (the
earlier one at line 113) covers all uses and remove the redundant statement.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 5a0b97f3-46e1-4f0f-8bdc-7bd090fc10f6

📥 Commits

Reviewing files that changed from the base of the PR and between 768cca9 and 9a077c2.

📒 Files selected for processing (85)
  • config.yml
  • examples/job/create_and_submit_job.ipynb
  • examples/job/get-file-from-job.ipynb
  • examples/job/ml-train-model-predict-properties.ipynb
  • examples/job/run-simulations-and-extract-properties.ipynb
  • examples/material/create_material.ipynb
  • examples/material/get_materials_by_formula.ipynb
  • examples/material/upload_materials_from_file_poscar.ipynb
  • examples/reproducing_publications/band_gaps_for_interface_bilayer_twisted_molybdenum_disulfide.ipynb
  • examples/reproducing_publications/band_structure_for_interface_bilayer_twisted_molybdenum_disulfide.ipynb
  • examples/system/get_authentication_params.ipynb
  • examples/workflow/get_workflows.ipynb
  • examples/workflow/qe_scf_calculation.ipynb
  • other/jarvis/run_job_using_material_from_jarvis_db.ipynb
  • other/materials_designer/create_adatom_defect.ipynb
  • other/materials_designer/create_cluster_custom_shape.ipynb
  • other/materials_designer/create_cluster_specific_shape.ipynb
  • other/materials_designer/create_cluster_sphere.ipynb
  • other/materials_designer/create_cutout_box.ipynb
  • other/materials_designer/create_cutout_custom_shape.ipynb
  • other/materials_designer/create_grain_boundary_crystal.ipynb
  • other/materials_designer/create_grain_boundary_film.ipynb
  • other/materials_designer/create_heterostructure_example.ipynb
  • other/materials_designer/create_interface_with_min_strain_zsl.ipynb
  • other/materials_designer/create_interface_with_no_strain_matching.ipynb
  • other/materials_designer/create_interface_with_relaxation_ase_emt.ipynb
  • other/materials_designer/create_island_defect.ipynb
  • other/materials_designer/create_island_defect_custom.ipynb
  • other/materials_designer/create_maxwell_disorder.ipynb
  • other/materials_designer/create_monolayer.ipynb
  • other/materials_designer/create_nanoribbon.ipynb
  • other/materials_designer/create_nanowire.ipynb
  • other/materials_designer/create_nanowire_custom_shape.ipynb
  • other/materials_designer/create_perturbation.ipynb
  • other/materials_designer/create_perturbation_custom.ipynb
  • other/materials_designer/create_point_defect.ipynb
  • other/materials_designer/create_point_defect_pair.ipynb
  • other/materials_designer/create_slab.ipynb
  • other/materials_designer/create_supercell.ipynb
  • other/materials_designer/create_terrace_defect.ipynb
  • other/materials_designer/create_twisted_interface_with_commensurate_lattices.ipynb
  • other/materials_designer/create_twisted_interface_with_nanoribbons.ipynb
  • other/materials_designer/custom_transformation.ipynb
  • other/materials_designer/import_material_from_jarvis_db_entry.ipynb
  • other/materials_designer/import_materials_from_files.ipynb
  • other/materials_designer/import_materials_from_standata.ipynb
  • other/materials_designer/optimize_film_position.ipynb
  • other/materials_designer/passivate_edge.ipynb
  • other/materials_designer/specific_examples/custom_cutout_etched_silicon.ipynb
  • other/materials_designer/specific_examples/defect_planar_grain_boundary_2d_boron_nitride.ipynb
  • other/materials_designer/specific_examples/defect_planar_grain_boundary_3d_fcc_metals_copper.ipynb
  • other/materials_designer/specific_examples/defect_point_interstitial_tin_oxide.ipynb
  • other/materials_designer/specific_examples/defect_point_pair_gallium_nitride.ipynb
  • other/materials_designer/specific_examples/defect_point_substitution_graphene.ipynb
  • other/materials_designer/specific_examples/defect_point_substitution_graphene_simulation.ipynb
  • other/materials_designer/specific_examples/defect_point_vacancy_boron_nitride.ipynb
  • other/materials_designer/specific_examples/defect_surface_adatom_graphene.ipynb
  • other/materials_designer/specific_examples/defect_surface_step_platinum.ipynb
  • other/materials_designer/specific_examples/heterostructure_silicon_silicon_dioxide_hafnium_dioxide_titanium_nitride.ipynb
  • other/materials_designer/specific_examples/interface_2d_2d_boron_nitride_graphene.ipynb
  • other/materials_designer/specific_examples/interface_2d_3d_graphene_silicon_dioxide.ipynb
  • other/materials_designer/specific_examples/interface_3d_3d_copper_cristobalite.ipynb
  • other/materials_designer/specific_examples/interface_bilayer_twisted_commensurate_lattices_molybdenum_disulfide.ipynb
  • other/materials_designer/specific_examples/interface_bilayer_twisted_nanoribbons_boron_nitride.ipynb
  • other/materials_designer/specific_examples/nanocluster_gold.ipynb
  • other/materials_designer/specific_examples/optimization_interface_film_xy_position_graphene_nickel.ipynb
  • other/materials_designer/specific_examples/passivation_edge_nanowire_silicon.ipynb
  • other/materials_designer/specific_examples/passivation_surface_silicon.ipynb
  • other/materials_designer/specific_examples/perturbation_ripple_graphene.ipynb
  • other/materials_designer/specific_examples/slab_strontium_titanate.ipynb
  • other/materials_designer/under_the_hood.ipynb
  • other/materials_designer/workflows/band_gap.ipynb
  • other/materials_designer/workflows/band_structure.ipynb
  • other/materials_designer/workflows/band_structure_hse.ipynb
  • other/materials_designer/workflows/convergence.ipynb
  • other/materials_designer/workflows/equation_of_state.ipynb
  • other/materials_designer/workflows/relaxation.ipynb
  • other/materials_designer/workflows/total_energy.ipynb
  • other/materials_designer/workflows/total_energy_convergence.ipynb
  • other/materials_designer/workflows/total_energy_post_processing.ipynb
  • other/materials_designer/workflows/valence_band_offset.ipynb
  • pyproject.toml
  • src/py/mat3ra/notebooks/utils/other/api.py
  • src/py/mat3ra/notebooks/utils/other/job.py
  • src/py/mat3ra/notebooks/utils/other/jupyterlite/__init__.py
✅ Files skipped from review due to trivial changes (3)
  • other/materials_designer/under_the_hood.ipynb
  • other/materials_designer/workflows/relaxation.ipynb
  • other/materials_designer/workflows/convergence.ipynb
🚧 Files skipped from review as they are similar to previous changes (66)
  • examples/material/upload_materials_from_file_poscar.ipynb
  • other/materials_designer/create_monolayer.ipynb
  • other/materials_designer/import_material_from_jarvis_db_entry.ipynb
  • other/materials_designer/specific_examples/optimization_interface_film_xy_position_graphene_nickel.ipynb
  • other/materials_designer/create_terrace_defect.ipynb
  • other/materials_designer/specific_examples/slab_strontium_titanate.ipynb
  • other/materials_designer/create_island_defect.ipynb
  • other/jarvis/run_job_using_material_from_jarvis_db.ipynb
  • other/materials_designer/create_slab.ipynb
  • other/materials_designer/specific_examples/perturbation_ripple_graphene.ipynb
  • other/materials_designer/create_grain_boundary_crystal.ipynb
  • other/materials_designer/create_cutout_box.ipynb
  • other/materials_designer/create_nanowire.ipynb
  • other/materials_designer/workflows/total_energy_convergence.ipynb
  • other/materials_designer/specific_examples/defect_planar_grain_boundary_3d_fcc_metals_copper.ipynb
  • examples/workflow/get_workflows.ipynb
  • other/materials_designer/create_cluster_sphere.ipynb
  • other/materials_designer/specific_examples/defect_planar_grain_boundary_2d_boron_nitride.ipynb
  • other/materials_designer/specific_examples/defect_point_interstitial_tin_oxide.ipynb
  • other/materials_designer/specific_examples/custom_cutout_etched_silicon.ipynb
  • other/materials_designer/workflows/total_energy_post_processing.ipynb
  • other/materials_designer/specific_examples/passivation_surface_silicon.ipynb
  • other/materials_designer/create_grain_boundary_film.ipynb
  • other/materials_designer/create_nanowire_custom_shape.ipynb
  • other/materials_designer/create_cluster_custom_shape.ipynb
  • other/materials_designer/specific_examples/defect_surface_adatom_graphene.ipynb
  • pyproject.toml
  • examples/material/create_material.ipynb
  • other/materials_designer/specific_examples/interface_bilayer_twisted_nanoribbons_boron_nitride.ipynb
  • other/materials_designer/optimize_film_position.ipynb
  • other/materials_designer/specific_examples/defect_point_pair_gallium_nitride.ipynb
  • other/materials_designer/create_heterostructure_example.ipynb
  • other/materials_designer/create_nanoribbon.ipynb
  • other/materials_designer/specific_examples/interface_2d_3d_graphene_silicon_dioxide.ipynb
  • other/materials_designer/passivate_edge.ipynb
  • other/materials_designer/workflows/band_structure_hse.ipynb
  • other/materials_designer/create_twisted_interface_with_commensurate_lattices.ipynb
  • examples/job/get-file-from-job.ipynb
  • other/materials_designer/import_materials_from_standata.ipynb
  • examples/reproducing_publications/band_gaps_for_interface_bilayer_twisted_molybdenum_disulfide.ipynb
  • other/materials_designer/workflows/band_structure.ipynb
  • other/materials_designer/create_point_defect_pair.ipynb
  • other/materials_designer/workflows/valence_band_offset.ipynb
  • examples/workflow/qe_scf_calculation.ipynb
  • other/materials_designer/import_materials_from_files.ipynb
  • examples/system/get_authentication_params.ipynb
  • other/materials_designer/specific_examples/nanocluster_gold.ipynb
  • other/materials_designer/specific_examples/passivation_edge_nanowire_silicon.ipynb
  • other/materials_designer/specific_examples/heterostructure_silicon_silicon_dioxide_hafnium_dioxide_titanium_nitride.ipynb
  • other/materials_designer/create_perturbation.ipynb
  • other/materials_designer/specific_examples/interface_2d_2d_boron_nitride_graphene.ipynb
  • other/materials_designer/workflows/band_gap.ipynb
  • other/materials_designer/create_interface_with_no_strain_matching.ipynb
  • other/materials_designer/custom_transformation.ipynb
  • other/materials_designer/create_adatom_defect.ipynb
  • other/materials_designer/specific_examples/defect_surface_step_platinum.ipynb
  • other/materials_designer/specific_examples/defect_point_vacancy_boron_nitride.ipynb
  • other/materials_designer/workflows/total_energy.ipynb
  • other/materials_designer/specific_examples/interface_3d_3d_copper_cristobalite.ipynb
  • other/materials_designer/create_perturbation_custom.ipynb
  • other/materials_designer/create_point_defect.ipynb
  • other/materials_designer/specific_examples/defect_point_substitution_graphene_simulation.ipynb
  • other/materials_designer/create_interface_with_min_strain_zsl.ipynb
  • other/materials_designer/workflows/equation_of_state.ipynb
  • other/materials_designer/create_island_defect_custom.ipynb
  • config.yml

Comment on lines 137 to 141
"from mat3ra.notebooks.utils.other.api import (\n",
" copy_bank_workflow_by_system_name,\n",
" wait_for_jobs_to_finish_async,\n",
" get_property_by_subworkflow_and_unit_indicies,\n",
")\n",

⚠️ Potential issue | 🔴 Critical | ⚡ Quick win

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Search for wait_for_jobs_to_finish_async definition in both modules
rg -n "def wait_for_jobs_to_finish_async" --type=py src/py/mat3ra/notebooks/utils/other/

Repository: Exabyte-io/api-examples

Length of output: 200


🏁 Script executed:

#!/bin/bash
# Check if wait_for_jobs_to_finish_async exists in api.py
rg -n "wait_for_jobs_to_finish_async" --type=py src/py/mat3ra/notebooks/utils/other/api.py

Repository: Exabyte-io/api-examples

Length of output: 49


🏁 Script executed:

#!/bin/bash
# Extract the imports from the notebook file to verify the exact import path
cat examples/job/ml-train-model-predict-properties.ipynb | python3 << 'EOF'
import json
import sys

with open('examples/job/ml-train-model-predict-properties.ipynb', 'r') as f:
    notebook = json.load(f)
    
# Search for cells containing wait_for_jobs_to_finish_async
for i, cell in enumerate(notebook['cells']):
    if cell['cell_type'] == 'code':
        source = ''.join(cell['source'])
        if 'wait_for_jobs_to_finish_async' in source:
            print(f"Cell {i}:")
            print(source)
            print("---")
EOF

Repository: Exabyte-io/api-examples

Length of output: 1324


Fix import path for wait_for_jobs_to_finish_async.

The function wait_for_jobs_to_finish_async is defined in mat3ra.notebooks.utils.other.job but the notebook currently imports it from mat3ra.notebooks.utils.other.api, which will cause an ImportError at runtime.

🔧 Proposed fix
 from mat3ra.notebooks.utils import ENDPOINT_ARGS, ACCOUNT_ID
+from mat3ra.notebooks.utils.other.job import wait_for_jobs_to_finish_async
 from mat3ra.notebooks.utils.other.api import (
     copy_bank_workflow_by_system_name,
-    wait_for_jobs_to_finish_async,
     get_property_by_subworkflow_and_unit_indicies,
 )
🧰 Tools
🪛 Ruff (0.15.12)

[error] 137-137: await statement outside of a function

(F704)


[error] 137-137: await should be used within an async function

(PLE1142)
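The two Ruff findings above flag a top-level `await` in the notebook cell. Notebook kernels accept top-level `await`, but the same line in a plain script must go through an event-loop entry point. A minimal sketch, using a hypothetical stand-in coroutine rather than the real API call:

```python
import asyncio


async def wait_for_jobs_demo(job_ids):
    # Hypothetical stand-in for wait_for_jobs_to_finish_async.
    await asyncio.sleep(0)
    return True


# In a script, wrap the coroutine in asyncio.run() instead of a bare await,
# which is exactly what F704/PLE1142 complain about outside async functions.
result = asyncio.run(wait_for_jobs_demo(["job-1"]))
```

Since the flagged lines live in a notebook, the Ruff errors are expected false positives there, but the wrapper above is the portable form.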

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@examples/job/ml-train-model-predict-properties.ipynb` around lines 137 - 141,
The notebook imports wait_for_jobs_to_finish_async from the wrong module; update
the import to pull wait_for_jobs_to_finish_async from
mat3ra.notebooks.utils.other.job instead of mat3ra.notebooks.utils.other.api so
the symbol resolves at runtime; keep the other imports
(copy_bank_workflow_by_system_name,
get_property_by_subworkflow_and_unit_indicies) as-is and only change the module
path for wait_for_jobs_to_finish_async in the import statement.

Comment on lines +147 to +150
"from mat3ra.notebooks.utils.other.api import (\n",
" wait_for_jobs_to_finish_async,\n",
" get_property_by_subworkflow_and_unit_indicies,\n",
")\n",

⚠️ Potential issue | 🔴 Critical | ⚡ Quick win

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Search for wait_for_jobs_to_finish_async definition in both modules
rg -n "def wait_for_jobs_to_finish_async" --type=py src/py/mat3ra/notebooks/utils/other/

Repository: Exabyte-io/api-examples

Length of output: 200


🏁 Script executed:

# Search for get_property_by_subworkflow_and_unit_indicies definition
rg -n "def get_property_by_subworkflow_and_unit_indicies" --type=py src/py/mat3ra/notebooks/utils/other/

Repository: Exabyte-io/api-examples

Length of output: 164


🏁 Script executed:

# Check the actual notebook file to confirm the import statement
cd examples/job && python3 << 'EOF'
import json
with open('run-simulations-and-extract-properties.ipynb', 'r') as f:
    nb = json.load(f)
    for i, cell in enumerate(nb['cells']):
        if cell['cell_type'] == 'code':
            source = ''.join(cell['source'])
            if 'wait_for_jobs_to_finish_async' in source:
                print(f"Cell {i}:")
                print(source)
                print("---")
EOF

Repository: Exabyte-io/api-examples

Length of output: 1078


Fix import path for wait_for_jobs_to_finish_async.

The function wait_for_jobs_to_finish_async is defined in mat3ra.notebooks.utils.other.job, not mat3ra.notebooks.utils.other.api. This import will fail at runtime when the function is called. Split the imports to import wait_for_jobs_to_finish_async from the job module and get_property_by_subworkflow_and_unit_indicies from the api module.

Proposed fix
 from mat3ra.notebooks.utils import ENDPOINT_ARGS, ACCOUNT_ID
-from mat3ra.notebooks.utils.other.api import (
-    wait_for_jobs_to_finish_async,
-    get_property_by_subworkflow_and_unit_indicies,
-)
+from mat3ra.notebooks.utils.other.job import wait_for_jobs_to_finish_async
+from mat3ra.notebooks.utils.other.api import get_property_by_subworkflow_and_unit_indicies
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@examples/job/run-simulations-and-extract-properties.ipynb` around lines 147 -
150, The import currently pulls wait_for_jobs_to_finish_async from
mat3ra.notebooks.utils.other.api but that function lives in
mat3ra.notebooks.utils.other.job; update the import statements so
wait_for_jobs_to_finish_async is imported from mat3ra.notebooks.utils.other.job
and get_property_by_subworkflow_and_unit_indicies remains imported from
mat3ra.notebooks.utils.other.api (i.e., split the combined import into two
imports referencing the correct modules and keeping the same symbol names).

Comment on lines +93 to +95
" await micropip.install(\n",
" \"https://exabyte-io.github.io/api-examples/mat3ra_notebooks_utils-0.1.dev1+g1c38b9566-py3-none-any.whl\"\n",
" )\n",

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Verify that the hardcoded wheel URL in the notebook is still reachable.
NB_PATH="$(fd -a 'get_materials_by_formula.ipynb' | head -n1)"

python - <<'PY'
import json, re, pathlib, sys
nb_path = pathlib.Path("""'"$NB_PATH"'""")
nb = json.loads(nb_path.read_text())
urls = []
for cell in nb.get("cells", []):
    if cell.get("cell_type") != "code":
        continue
    for line in cell.get("source", []):
        m = re.search(r'https?://[^"\']+mat3ra_notebooks_utils[^"\']+\.whl', line)
        if m:
            urls.append(m.group(0))
if not urls:
    print("No mat3ra_notebooks_utils wheel URLs found.")
    sys.exit(1)
for u in urls:
    print(u)
PY

# Check HTTP status for each discovered URL (expect 200).
python - <<'PY'
import json, re, pathlib, subprocess, sys
from urllib.parse import urlparse

nb_path = pathlib.Path("""'"$NB_PATH"'""")
nb = json.loads(nb_path.read_text())
urls = []
for cell in nb.get("cells", []):
    if cell.get("cell_type") != "code":
        continue
    for line in cell.get("source", []):
        m = re.search(r'https?://[^"\']+mat3ra_notebooks_utils[^"\']+\.whl', line)
        if m:
            urls.append(m.group(0))

failed = False
for u in urls:
    code = subprocess.check_output(
        ["curl", "-sS", "-o", "/dev/null", "-w", "%{http_code}", "-I", u],
        text=True
    ).strip()
    print(f"{code}  {u}")
    if code != "200":
        failed = True

if failed:
    sys.exit(2)
PY

Repository: Exabyte-io/api-examples

Length of output: 608


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Find the notebook file
NB_FILE=$(find . -name "get_materials_by_formula.ipynb" -type f | head -n1)
echo "Found notebook: $NB_FILE"

# Extract and display wheel URLs using Python
python3 <<'EOF'
import json
import re

nb_file = None
import os
for root, dirs, files in os.walk('.'):
    if 'get_materials_by_formula.ipynb' in files:
        nb_file = os.path.join(root, 'get_materials_by_formula.ipynb')
        break

if nb_file:
    print(f"Notebook path: {nb_file}")
    with open(nb_file, 'r') as f:
        nb = json.load(f)
    
    urls = []
    for cell in nb.get('cells', []):
        if cell.get('cell_type') == 'code':
            source = cell.get('source', [])
            if isinstance(source, list):
                source = ''.join(source)
            # Find wheel URLs
            matches = re.findall(r'https?://[^\s"\']+mat3ra_notebooks_utils[^\s"\']+\.whl', source)
            urls.extend(matches)
    
    if urls:
        print("\nFound wheel URLs:")
        for url in urls:
            print(f"  {url}")
    else:
        print("No mat3ra_notebooks_utils wheel URLs found")
else:
    print("Notebook file not found")
EOF

Repository: Exabyte-io/api-examples

Length of output: 323


🏁 Script executed:

#!/bin/bash

URL="https://exabyte-io.github.io/api-examples/mat3ra_notebooks_utils-0.1.dev1+g1c38b9566-py3-none-any.whl"

echo "Testing URL accessibility..."
HTTP_CODE=$(curl -sS -o /dev/null -w "%{http_code}" -I "$URL" 2>&1 || echo "ERROR")
echo "HTTP Status for URL: $HTTP_CODE"

# Also test if we can get the content size
if [ "$HTTP_CODE" = "200" ]; then
    SIZE=$(curl -sS -I "$URL" 2>&1 | grep -i content-length | awk '{print $2}')
    echo "Content-Length: $SIZE"
fi

Repository: Exabyte-io/api-examples

Length of output: 144


Prefer stable package versions over commit-specific dev wheels.

The current wheel URL on line 94 relies on a commit-specific dev version pinned to GitHub Pages. While the URL is currently accessible, using versioned releases or a centrally managed package source would make notebook startup more resilient to future artifact rotation or repository reorganization.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@examples/material/get_materials_by_formula.ipynb` around lines 93 - 95, The
notebook currently calls micropip.install with a commit-specific dev wheel URL;
replace that URL in the micropip.install(...) call with a stable, versioned
package source (preferably a PyPI package name and version like
"mat3ra-notebooks-utils==x.y.z" or a tagged release wheel URL) so startup isn't
tied to a commit hash or GitHub Pages path—update the argument to
micropip.install to use the released package identifier or a canonical artifact
URL instead of the commit-specific dev wheel.



@interruptible_polling_loop()
def wait_for_jobs_to_finish_async(endpoint: JobEndpoints, job_ids: List[str]) -> bool:

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Short-circuit empty job lists before polling.

For job_ids=[], Line 49 evaluates to True via not statuses, so the polling predicate can continue indefinitely instead of finishing immediately.

💡 Suggested fix
 @interruptible_polling_loop()
 def wait_for_jobs_to_finish_async(endpoint: JobEndpoints, job_ids: List[str]) -> bool:
+    if not job_ids:
+        return False
+
     statuses = get_jobs_statuses_by_ids(endpoint, job_ids)

Also applies to: 35-35, 49-49

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/py/mat3ra/notebooks/utils/other/job.py` at line 26, The polling functions
(e.g., wait_for_jobs_to_finish_async and its sync counterpart
wait_for_jobs_to_finish) currently treat an empty job_ids list as "no statuses"
and can loop forever; add an early short-circuit at the start of each function
to immediately return True when job_ids is empty, and likewise ensure any
internal polling predicate that checks statuses returns True when the input
job_ids list is empty so the poll stops immediately; locate functions by name
(wait_for_jobs_to_finish_async, wait_for_jobs_to_finish and any internal
predicate that uses "statuses" from endpoint.get_job_statuses) and implement the
empty-list guard there.

Comment thread src/py/mat3ra/notebooks_utils/other/jupyterlite/__init__.py Outdated
@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 14

♻️ Duplicate comments (8)
src/py/mat3ra/notebooks_utils/python/other/plot/_plotly.py (1)

146-149: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Use z-value at (x_opt, y_opt), not global minimum.

The marker height is still computed from np.min(z_matrix), so it can be placed at the wrong location.

Proposed fix
     if optimal_point is not None:
         x_opt, y_opt = optimal_point
-        z_opt = np.min(z_matrix)
+        x_idx = np.argmin(np.abs(x_matrix[0, :] - x_opt))
+        y_idx = np.argmin(np.abs(y_matrix[:, 0] - y_opt))
+        z_opt = z_matrix[y_idx, x_idx]
         fig.add_trace(
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/py/mat3ra/notebooks_utils/python/other/plot/_plotly.py` around lines 146
- 149, The marker z is being set to the global minimum (z_opt =
np.min(z_matrix)) instead of the z at the selected (x_opt, y_opt); replace that
line with logic that finds the corresponding z value in the grid used to build
z_matrix (e.g., locate the nearest indices by comparing x_opt/y_opt to the x and
y coordinate arrays used to create z_matrix — often named x_vals/x_coords and
y_vals/y_coords or produced by np.meshgrid — then set z_opt = z_matrix[y_idx,
x_idx]; if you need sub-grid accuracy, perform a 2D interpolation of z_matrix at
(x_opt, y_opt) instead). Ensure you reference the same coordinate arrays used
when z_matrix was created so the marker is placed at the correct height.
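The nearest-index lookup proposed above can be checked on a toy grid. The grid and quadratic surface here are illustrative only; the real `x_matrix`/`y_matrix`/`z_matrix` come from the plotting helper:

```python
import numpy as np

# Hypothetical grid mirroring how the matrices are built via meshgrid.
x = np.linspace(0.0, 1.0, 5)
y = np.linspace(0.0, 2.0, 5)
x_matrix, y_matrix = np.meshgrid(x, y)
z_matrix = (x_matrix - 0.5) ** 2 + (y_matrix - 1.0) ** 2 + 3.0

x_opt, y_opt = 0.75, 1.5  # the point we want to mark

# Wrong: global minimum, which may sit at a different (x, y) than the marker.
z_wrong = np.min(z_matrix)

# Right: look up z at the grid node nearest to (x_opt, y_opt).
x_idx = np.argmin(np.abs(x_matrix[0, :] - x_opt))
y_idx = np.argmin(np.abs(y_matrix[:, 0] - y_opt))
z_opt = z_matrix[y_idx, x_idx]
```

On this surface the global minimum is 3.0 at (0.5, 1.0), while the marker at (0.75, 1.5) belongs at 3.3125, so the two values visibly differ.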
src/py/mat3ra/notebooks_utils/logger.py (1)

27-27: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Parse VERBOSE env var explicitly as boolean.

Current fallback treats non-empty strings like "False" as truthy, so logging can be unintentionally enabled.

💡 Suggested fix
-            should_log = caller_globals.get("VERBOSE", os.environ.get("VERBOSE", True))
+            env_verbose = os.environ.get("VERBOSE")
+            if env_verbose is None:
+                default_verbose = True
+            else:
+                default_verbose = env_verbose.strip().lower() in {"1", "true", "yes", "on"}
+            should_log = caller_globals.get("VERBOSE", default_verbose)
#!/bin/bash
# Verify current VERBOSE fallback expression in logger implementation.
rg -n -C2 --type=py 'caller_globals\.get\("VERBOSE", os\.environ\.get\("VERBOSE", True\)\)' src/py/mat3ra/notebooks_utils/logger.py
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/py/mat3ra/notebooks_utils/logger.py` at line 27, The current fallback
treats the VERBOSE env var as a truthy string; change the should_log assignment
so the environment value is parsed explicitly as a boolean before falling back
to caller_globals: replace the caller_globals.get("VERBOSE",
os.environ.get("VERBOSE", True)) usage with a boolean parse of
os.environ.get("VERBOSE", "True") (e.g., use a small helper or
distutils.util.strtobool and map to bool, or test lower() in
("1","true","yes","y")) and then call caller_globals.get("VERBOSE",
parsed_bool); update imports or add the helper function and ensure the variable
named should_log uses this parsed boolean.
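A small helper in the spirit of the prompt above might look like this. The accepted truthy spellings are a choice, not something the logger module mandates:

```python
import os


def parse_bool_env(name: str, default: bool = True) -> bool:
    # Treat an unset variable as the default; otherwise accept common
    # truthy spellings so VERBOSE="False" actually disables logging.
    value = os.environ.get(name)
    if value is None:
        return default
    return value.strip().lower() in {"1", "true", "yes", "on"}
```

The logger would then call `caller_globals.get("VERBOSE", parse_bool_env("VERBOSE"))` so a notebook-level `VERBOSE` still wins over the environment.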
src/py/mat3ra/notebooks_utils/python/core/json.py (1)

40-47: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Harden JSON rendering against script injection.

Raw JSON is interpolated into executable <script> content. A payload containing </script> can break out and execute arbitrary JS.

🛡️ Suggested hardening
-        if isinstance(obj, (dict, list)):
-            json_str = json.dumps(obj)
-        else:
-            json_str = obj
-
-        id = str(uuid.uuid4())
+        json_str = json.dumps(obj).replace("</", "<\\/")
+        container_id = str(uuid.uuid4())

@@
-        display(HTML(f'<style>{css}</style><div id="{id}"></div>'))
+        display(HTML(f'<style>{css}</style><div id="{container_id}"></div>'))
         display(
             HTML(
                 f"<script>{js} "
                 f"renderjson.set_show_to_level({str(level)}); "
                 f'renderjson.set_icons("▸","▾"); '
-                f'document.getElementById("{id}").appendChild(renderjson({json_str}))</script>'
+                f"const data = JSON.parse({json.dumps(json_str)}); "
+                f'document.getElementById("{container_id}").appendChild(renderjson(data));</script>'
             )
         )
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/py/mat3ra/notebooks_utils/python/core/json.py` around lines 40 - 47, The
code currently interpolates raw json_str into an executable <script>, allowing a
payload like </script> to break out; instead, emit the JSON into a
non-executable container and read it from DOM before calling renderjson: create
a separate element (for example a <script type="application/json"
id="{id}_data"> or a <div hidden id="{id}_data">) that contains json_str as
textContent, then in the existing script code call
JSON.parse(document.getElementById("{id}_data").textContent) and pass that
resulting object to renderjson (use the existing id and renderjson references),
removing any direct interpolation of json_str into the executable script.
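The non-executable-container pattern the prompt describes can be sketched outside IPython; the helper name and the id scheme below are illustrative, not taken from the codebase:

```python
import json
import uuid

def safe_json_html(obj):
    # Serialize and escape "</" so a value like "</script>" cannot
    # terminate the surrounding <script> element early.
    payload = json.dumps(obj).replace("</", "<\\/")
    container_id = str(uuid.uuid4())
    # The data lives in a non-executable script tag; a separate script
    # would read it back via JSON.parse(element.textContent).
    return (
        f'<script type="application/json" id="{container_id}_data">{payload}</script>'
        f'<div id="{container_id}"></div>'
    )
```

Because the payload sits in a `type="application/json"` element, the browser never executes it even if escaping were bypassed.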
src/py/mat3ra/notebooks_utils/jupyterlite/url.py (3)

48-65: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Fix incomplete docstring and handle unexpected ENVIRONMENT values.

The function has multiple issues previously flagged:

  1. Incomplete docstring at Line 54
  2. Can return None if ENVIRONMENT doesn't match either enum value (violates return type)
  3. Wrong return type in docstring (Line 57 says str instead of Union[str, bytes])
📝 Proposed fix
 async def read_from_url(url: str, as_bytes: bool = False) -> Union[str, bytes]:
     """
     Read content from a URL, handling both Python and Pyodide environments.

     Args:
         url (str): The URL to fetch from.
-        as_bytes (bool): Whether to return the content as
+        as_bytes (bool): Whether to return the content as bytes.

     Returns:
-        str: The content as a string or bytes.
+        Union[str, bytes]: The content as a string or bytes.
+
+    Raises:
+        ValueError: If the environment is not recognized.
     """
-    response = None
     if ENVIRONMENT == EnvironmentsEnum.PYODIDE:
-        response = await read_from_url_pyodide(url, as_bytes)
+        return await read_from_url_pyodide(url, as_bytes)
     elif ENVIRONMENT == EnvironmentsEnum.PYTHON:
-        response = read_from_url_python(url, as_bytes)
-
-    return response
+        return read_from_url_python(url, as_bytes)
+    else:
+        raise ValueError(f"Unsupported environment: {ENVIRONMENT}")
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/py/mat3ra/notebooks_utils/jupyterlite/url.py` around lines 48 - 65, The
read_from_url function has an incomplete/incorrect docstring and can return None
if ENVIRONMENT is not handled; update the docstring to fully describe parameters
and return type as Union[str, bytes], and ensure the function always returns a
Union[str, bytes] by adding an explicit fallback branch that raises a clear
exception (or returns a safe default) when ENVIRONMENT is neither
EnvironmentsEnum.PYODIDE nor EnvironmentsEnum.PYTHON; reference the function
name read_from_url and the helpers read_from_url_pyodide, read_from_url_python
and the EnvironmentsEnum values when implementing the fix.

7-24: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Add URL scheme validation, timeout, and error handling.

The urlopen call on Line 20 has multiple security and reliability issues:

  1. Accepts any URL scheme (including file://, ftp://), creating SSRF risk
  2. No timeout, can hang indefinitely
  3. No error handling for network failures

All three issues were previously flagged and remain unresolved.

🔒 Proposed fix
 def read_from_url_python(url: str, as_bytes: bool = False) -> Union[str, bytes]:
     """
     Fetch, read, and decode content from a URL in a Python environment.

     Args:
         url (str): The URL to fetch from.
         as_bytes (bool): Whether to return the content as bytes.

     Returns:
-        str: The content as a string or bytes.
+        Union[str, bytes]: The content as a string or bytes.
+
+    Raises:
+        ValueError: If the URL scheme is not supported.
+        urllib.error.URLError: If the URL cannot be fetched.
     """
     import urllib.request
+    from urllib.error import URLError
+    from urllib.parse import urlparse

+    parsed = urlparse(url)
+    if parsed.scheme not in ("http", "https"):
+        raise ValueError(f"Unsupported URL scheme: {parsed.scheme}. Only http and https are allowed.")

-    with urllib.request.urlopen(url) as response:
+    try:
+        with urllib.request.urlopen(url, timeout=30) as response:
+            body = response.read()
+            if as_bytes:
+                return body
+            return body.decode("utf-8")
+    except URLError as e:
+        raise URLError(f"Failed to fetch URL {url}: {e}") from e
-        body = response.read()
-        if as_bytes:
-            return body
-        return body.decode("utf-8")
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/py/mat3ra/notebooks_utils/jupyterlite/url.py` around lines 7 - 24, The
read_from_url_python function currently calls urllib.request.urlopen without
validating the URL scheme, without a timeout, and without error handling; update
read_from_url_python to first parse and validate the URL scheme (only allow
"http" and "https"), call urlopen with a sensible timeout (e.g., pass a timeout
parameter) when opening the URL, and wrap the network call in try/except to
catch urllib.error.URLError, HTTPError, ValueError (from invalid URL parsing)
and socket.timeout, logging or re-raising a clear exception; ensure the function
still returns bytes when as_bytes is True and decoded text otherwise, and
reference read_from_url_python, the urlopen call, and the body/response
variables when locating the change.

27-45: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Fix docstring and add error handling for fetch failures.

The function lacks error handling for network errors and has an incorrect return type in the docstring (Line 36 says str but should be Union[str, bytes]).

This was previously flagged and remains unresolved.

🛡️ Proposed fix
 async def read_from_url_pyodide(url: str, as_bytes: bool = False) -> Union[str, bytes]:
     """
     Fetch and read content from a URL in a Pyodide environment.

     Args:
         url (str): The URL to fetch from.
         as_bytes (bool): Whether to return the content as bytes.

     Returns:
-        str: The content as a string or bytes.
+        Union[str, bytes]: The content as a string or bytes.
+
+    Raises:
+        Exception: If the fetch operation fails.
     """
     # `http` is a Pyodide module that will be installed in the Pyodide environment by default.
     from pyodide.http import pyfetch  # type: ignore

-    # Per https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch
-    response = await pyfetch(url)
-    if as_bytes:
-        return await response.bytes()
-    return await response.string()
+    try:
+        # Per https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch
+        response = await pyfetch(url)
+        if not response.ok:
+            raise Exception(f"HTTP {response.status}: {response.status_text}")
+        if as_bytes:
+            return await response.bytes()
+        return await response.string()
+    except Exception as e:
+        raise Exception(f"Failed to fetch URL {url}: {e}") from e
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/py/mat3ra/notebooks_utils/jupyterlite/url.py` around lines 27 - 45, The
read_from_url_pyodide function's docstring return type is wrong and it lacks
fetch error handling; update the docstring to state "Union[str, bytes]" as the
return type and add network/error handling around the pyfetch call in
read_from_url_pyodide: after awaiting pyfetch(url) check response.ok (or
equivalent status) and raise a descriptive exception (include response.status
and url) on non-OK responses, and wrap the pyfetch/response reading in a
try/except to catch and re-raise any fetch/IO exceptions with context so callers
get useful error messages.
src/py/mat3ra/notebooks_utils/python/other/api/job.py (1)

26-50: ⚠️ Potential issue | 🔴 Critical | ⚡ Quick win

Short-circuit empty job lists before polling.

For job_ids=[], Line 50 evaluates not statuses to True, causing the polling predicate to continue indefinitely instead of stopping immediately.

Add an early return at the start of the function.

💡 Suggested fix
@interruptible_polling_loop()
 def wait_for_jobs_to_finish_async(endpoint: JobEndpoints, job_ids: List[str]) -> bool:
     """
     Waits for jobs to finish and prints their statuses.
    A job is considered finished if it is not in "pre-submission", "submitted", or "active" status.

     Args:
         endpoint (JobEndpoints): Job endpoint object from the Exabyte API Client
         job_ids (list): list of job IDs to wait for
     """
+    if not job_ids:
+        return False
+
     statuses = get_jobs_statuses_by_ids(endpoint, job_ids)
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/py/mat3ra/notebooks_utils/python/other/api/job.py` around lines 26 - 50,
The polling predicate in wait_for_jobs_to_finish_async incorrectly treats an
empty job_ids list as "continue" because it relies on statuses later; add an
early short-circuit: if not job_ids: return False at the top of
wait_for_jobs_to_finish_async so the function stops immediately when job_ids is
empty (use the existing job_ids parameter and return a boolean consistent with
the predicate contract).
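The intended predicate semantics can be illustrated with a small stand-in (the function name and status strings mirror the comment, not the actual module):

```python
def all_jobs_finished(statuses):
    # An empty status list means there is nothing left to wait for,
    # so the poll should stop immediately rather than spin on
    # "not statuses" evaluating truthy.
    if not statuses:
        return True
    unfinished = {"pre-submission", "submitted", "active"}
    return all(status not in unfinished for status in statuses)
```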
src/py/mat3ra/notebooks_utils/jupyterlite/interrupts.py (1)

164-169: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Fix potential negative remaining_seconds.

The arithmetic remaining_seconds -= check_interval_seconds can cause remaining_seconds to become slightly negative when check_interval_seconds is larger than the remaining time, leading to one extra small sleep.

🐛 Proposed fix
             remaining_seconds = float(poll_interval_seconds)
             while remaining_seconds > 0:
                 if broadcast_channel_abort_controller.is_aborted:
                     raise UserAbortError("Aborted by user.")
-                await asyncio.sleep(min(check_interval_seconds, remaining_seconds))
-                remaining_seconds -= check_interval_seconds
+                sleep_duration = min(check_interval_seconds, remaining_seconds)
+                await asyncio.sleep(sleep_duration)
+                remaining_seconds -= sleep_duration
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/py/mat3ra/notebooks_utils/jupyterlite/interrupts.py` around lines 164 -
169, The loop may make remaining_seconds slightly negative because it always
subtracts check_interval_seconds; change the logic in the wait-for-interrupt
loop so you compute sleep_duration = min(check_interval_seconds,
remaining_seconds) (or subtract by that same min) and then await
asyncio.sleep(sleep_duration) and decrement remaining_seconds by sleep_duration;
keep the existing abort check against
broadcast_channel_abort_controller.is_aborted and raise UserAbortError as before
so the loop never produces a negative remaining_seconds value.
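A minimal, runnable sketch of the corrected chunked sleep; the abort check is simplified here to a plain callable instead of the broadcast-channel controller:

```python
import asyncio

async def sleep_in_chunks(total_seconds, check_interval_seconds, is_aborted):
    # Sleep in small chunks so an abort can interrupt a long poll interval;
    # sleeping min(chunk, remaining) keeps `remaining` from going negative.
    remaining = float(total_seconds)
    chunks = []
    while remaining > 0:
        if is_aborted():
            raise RuntimeError("Aborted by user.")
        chunk = min(check_interval_seconds, remaining)
        chunks.append(chunk)
        await asyncio.sleep(chunk)
        remaining -= chunk
    return chunks
```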
🧹 Nitpick comments (12)
examples/reproducing_publications/band_structure_for_interface_bilayer_twisted_molybdenum_disulfide.ipynb (1)

208-208: 💤 Low value

Remove duplicate import.

visualize_materials is already imported at Line 112. This duplicate import is unnecessary.

♻️ Proposed fix
-from mat3ra.notebooks_utils.other.visualize import visualize_materials
-
 for interface in interfaces:
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In
`@examples/reproducing_publications/band_structure_for_interface_bilayer_twisted_molybdenum_disulfide.ipynb`
at line 208, Remove the duplicate import of visualize_materials: locate the
redundant import statement "from mat3ra.notebooks_utils.other.visualize import
visualize_materials" (the one around line 208) and delete it so only the
original import (the one at line ~112) remains; ensure no other uses or aliases
rely on the removed line and run tests/notebook to confirm imports still
resolve.
src/py/mat3ra/notebooks_utils/__init__.py (1)

17-31: 💤 Low value

Consider sorting __all__ for consistency.

Static analysis suggests applying isort-style sorting to the __all__ list.

♻️ Proposed sort order
 __all__ = [
-    "display_JSON",
-    "absolute_path_to_settings_json_file",
-    "settings_json_config",
     "ACCOUNT_ID",
     "AUTH_TOKEN",
-    "MATERIALS_PROJECT_API_KEY",
-    "ORGANIZATION_ID",
-    "PORT",
-    "SECURE",
-    "VERSION",
+    "ENDPOINT_ARGS",
     "HOST",
-    "ENDPOINT_ARGS",
+    "MATERIALS_PROJECT_API_KEY",
+    "ORGANIZATION_ID",
+    "PORT",
+    "SECURE",
     "UPLOADS_FOLDER",
+    "VERSION",
+    "absolute_path_to_settings_json_file",
+    "display_JSON",
+    "settings_json_config",
 ]

As per coding guidelines: Ruff (RUF022) – Apply an isort-style sorting to __all__.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/py/mat3ra/notebooks_utils/__init__.py` around lines 17 - 31, The __all__
list is unsorted and triggers RUF022; reorder the entries in the module-level
__all__ to an isort-style alphabetical order (e.g., alphabetize the exported
names such as ACCOUNT_ID, AUTH_TOKEN, ENDPOINT_ARGS, HOST,
MATERIALS_PROJECT_API_KEY, PORT, SECURE, etc.) so that the
__all__ list is consistently sorted; locate the __all__ definition in this
module and replace its current order with the alphabetically sorted list of the
exported symbols.
other/materials_designer/create_nanowire_custom_shape.ipynb (1)

49-49: ⚖️ Poor tradeoff

Consider parameterizing the wheel URL.

The wheel URL is hardcoded with a specific git hash (g1e019a2ab). This requires manual updates on every commit and appears in multiple notebooks.

Consider either:

  • Using a build-time template variable that gets substituted during notebook generation
  • Fetching the latest wheel URL dynamically from a stable endpoint
  • Using a version-tagged release URL instead of a git-hash-specific build

This pattern also appears in:

  • passivate_slab.ipynb (line 97)
  • import_material_from_jarvis_db_entry.ipynb (line 70)
  • interface_2d_3d_graphene_silicon_dioxide.ipynb (line 88)
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@other/materials_designer/create_nanowire_custom_shape.ipynb` at line 49, The
notebook currently hardcodes the wheel URL inside the micropip.install call (the
string 'https://exabyte-io.github.io/...g1e019a2ab-py3-none-any.whl'); replace
this literal with a parameterized value and load that value at notebook
generation or runtime (e.g., a template variable like WHEEL_URL or a config/env
var) or alternatively implement a small helper that resolves a stable,
version-tagged release or queries a stable endpoint for the latest wheel URL
before calling micropip.install; update the micropip.install invocation in
create_nanowire_custom_shape.ipynb and the same pattern in passivate_slab.ipynb,
import_material_from_jarvis_db_entry.ipynb, and
interface_2d_3d_graphene_silicon_dioxide.ipynb to use the new parameter or
resolution helper (look for the micropip.install call and the hardcoded URL
string to locate changes).
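One lightweight option is an environment-variable override with a pinned fallback; the MAT3RA_UTILS_WHEEL variable and the default URL below are hypothetical placeholders:

```python
import os

# Hypothetical stable fallback; a version-tagged release URL would go here.
DEFAULT_WHEEL_URL = (
    "https://example.com/releases/mat3ra_notebooks_utils-1.0.0-py3-none-any.whl"
)

def resolve_wheel_url() -> str:
    # A build pipeline (or a user) can swap the wheel without editing
    # every notebook that installs it.
    return os.environ.get("MAT3RA_UTILS_WHEEL", DEFAULT_WHEEL_URL)
```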
src/py/mat3ra/notebooks_utils/python/other/plot/_matplotlib.py (1)

47-83: 💤 Low value

Consider validating required keys in series items.

Lines 72-73 will raise KeyError if x_key or y_key are missing from any item in series. Adding validation would provide clearer error messages.

♻️ Example validation
 def plot_series(
     series: List[Dict],
     x_key: str,
     y_key: str,
     xlabel: str,
     ylabel: str,
     title: str,
     figsize: Tuple[int, int] = (8, 5),
     marker: str = "o",
     rotation: int = 45,
 ) -> None:
     """
     Plot a series of data points with configurable parameters.

     Args:
         series: List of dictionaries containing data points.
         x_key: Key to extract x values from series items.
         y_key: Key to extract y values from series items.
         xlabel: Label for x-axis.
         ylabel: Label for y-axis.
         title: Title of the plot.
         figsize: Size of the figure.
         marker: Marker style for data points.
         rotation: Rotation angle for x-axis labels.
     """
+    # Validate keys exist in all items
+    for i, item in enumerate(series):
+        if x_key not in item:
+            raise ValueError(f"Item {i} missing required key '{x_key}'")
+        if y_key not in item:
+            raise ValueError(f"Item {i} missing required key '{y_key}'")
+
     x_labels = [str(item[x_key]) for item in series]
     y_values = [item[y_key] for item in series]
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/py/mat3ra/notebooks_utils/python/other/plot/_matplotlib.py` around lines
47 - 83, In plot_series, validate that each item in the series is a mapping
containing both x_key and y_key before building x_labels/y_values; iterate over
series (use the existing parameter names series, x_key, y_key) and if an item is
not a dict/mapping or is missing a key raise a clear ValueError identifying the
missing key and the item index (or normalize/filter with a logged skip), so that
lines building x_labels = [...] and y_values = [...] cannot raise an unhelpful
KeyError.
src/py/mat3ra/notebooks_utils/jupyterlite/ui.py (1)

6-28: 💤 Low value

Use the assigned variable for consistency.

Line 27 prints array[index] after assigning result = array[index] on Line 26. For consistency and to avoid redundant indexing, use the assigned variable.

♻️ Proposed fix
     result = array[index]
-    print(f"Selected {element_name}: ", array[index])
+    print(f"Selected {element_name}: ", result)
     return result
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/py/mat3ra/notebooks_utils/jupyterlite/ui.py` around lines 6 - 28, In
ui_prompt_select_array_element_by_index_pyodide, after assigning result =
array[index] you should use that variable in the print statement for consistency
and to avoid redundant indexing; replace print(f"Selected {element_name}: ",
array[index]) with print(f"Selected {element_name}: ", result) so the function
prints the already-computed result variable instead of re-indexing the array.
src/py/mat3ra/notebooks_utils/python/io.py (2)

35-50: ⚡ Quick win

Add validation for expected data structure.

The function assumes value is iterable and each item has a "name" key (Line 46), but provides no validation or error handling. If the input doesn't match this structure, the function will raise a cryptic KeyError or TypeError.

🛡️ Proposed fix with validation
-def set_data_python(key: str, value: Any):
+def set_data_python(key: str, value: List[Dict[str, Any]]):
     """
     Write data to the `uploads` folder in a JupyterLab environment.

     Args:
         key (str): The name under which data will be written.
-        value (Any): The value to write to the `uploads` folder.
+        value (List[Dict[str, Any]]): List of items with 'name' keys to write.
     """
     if not os.path.exists(UPLOADS_FOLDER):
         os.makedirs(UPLOADS_FOLDER)
     for item in value:
+        if not isinstance(item, dict) or "name" not in item:
+            raise ValueError(f"Each item must be a dict with a 'name' key, got: {item}")
         safe_name = item["name"].replace("%", "pct").replace("/", ":")
         file_path = os.path.join(UPLOADS_FOLDER, f"{safe_name}.json")
         with open(file_path, "w") as file:
             json.dump(item, file)
         log(f"Data for {key} written to {file_path}")
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/py/mat3ra/notebooks_utils/python/io.py` around lines 35 - 50, The
set_data_python function assumes 'value' is iterable of dicts with a "name" key
and lacks validation; add input checks at the start of set_data_python to ensure
value is an iterable (not a string) and each item is a mapping that contains a
non-empty "name" string, and handle invalid items by either skipping with a
warning via log(...) or raising a clear ValueError; when iterating, wrap the
per-item logic that constructs safe_name and writes to UPLOADS_FOLDER in a
try/except to catch TypeError/KeyError/IOError and log the offending item and
error so errors are actionable.

9-33: ⚡ Quick win

Add return type annotation and use consistent logging.

The function returns either a list or None, but the return type annotation is missing. Additionally, Line 32 uses print() while Line 25 uses log(), creating inconsistent output handling.

♻️ Proposed fix
+from typing import Any, Dict, List, Optional
+
-def get_data_python(key: str, globals_dict: Optional[Dict] = None):
+def get_data_python(key: str, globals_dict: Optional[Dict] = None) -> Optional[List]:
     """
     Read data from the `uploads` folder in a JupyterLab environment.

     Args:
         key (str): The name under which data is expected to be received.
         globals_dict (dict, optional): A dictionary to store the received data. Defaults to None.
+
+    Returns:
+        Optional[List]: List of data from JSON files, or None if folder not found.
     """
     try:
         data_from_host = []
         index = 0
         for filename in sorted(os.listdir(UPLOADS_FOLDER)):
             if filename.endswith(".json"):
                 with open(os.path.join(UPLOADS_FOLDER, filename), "r") as file:
                     data = json.load(file)
                 name = os.path.splitext(filename)[0]
                 log(f"{index}: {name}")
                 index += 1
                 data_from_host.append(data)
         if globals_dict is not None:
             globals_dict[key] = data_from_host
         return data_from_host
     except FileNotFoundError:
-        print("No data found in the 'uploads' folder.")
+        log("No data found in the 'uploads' folder.")
+        return None
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/py/mat3ra/notebooks_utils/python/io.py` around lines 9 - 33, The function
get_data_python lacks a return type and mixes logging functions; update its
signature to include a proper return annotation (e.g., -> Optional[List[Any]] or
-> Optional[List[Dict[str, Any]]]) and import any required typing names, replace
the print("No data found...") in the except FileNotFoundError block with the
same logger used elsewhere (log(...)) and ensure the except returns None
explicitly; keep the rest of the logic and the use of globals_dict[key] intact.
src/py/mat3ra/notebooks_utils/ui.py (1)

10-25: 💤 Low value

Add select_coordination_threshold to __all__ for explicit export.

The module defines select_coordination_threshold as a public async function (Line 10) but only exports dataframe_to_html in __all__ (Line 25). If select_coordination_threshold is intended to be part of the public API, it should be included in __all__.

♻️ Proposed fix (if function should be public)
-__all__ = ["dataframe_to_html"]
+__all__ = ["dataframe_to_html", "select_coordination_threshold"]
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/py/mat3ra/notebooks_utils/ui.py` around lines 10 - 25, The module defines
an exported public async function select_coordination_threshold (which delegates
to select_coordination_threshold_emscripten or
select_coordination_threshold_python) but __all__ only lists
"dataframe_to_html"; add "select_coordination_threshold" to the __all__ list so
the function is explicitly exported as part of the public API, e.g., update the
__all__ assignment to include "select_coordination_threshold" alongside
"dataframe_to_html".
other/materials_designer/specific_examples/optimization_interface_film_xy_position_graphene_nickel.ipynb (1)

36-45: ⚖️ Poor tradeoff

Hardcoded wheel URL will become stale.

Line 41 contains a hardcoded wheel URL with a specific git commit hash (g1e019a2ab). This will require manual updates for each new build and may break if the URL changes or the artifact is removed.

Consider using a version tag, environment variable, or a stable release URL instead of a git-hash-specific development wheel.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In
`@other/materials_designer/specific_examples/optimization_interface_film_xy_position_graphene_nickel.ipynb`
around lines 36 - 45, The emscripten install block currently hardcodes a wheel
URL with a git-commit hash in the await micropip.install call (the string passed
to micropip.install inside the sys.platform == "emscripten" branch), which will
become stale; change that call to resolve a stable artifact (use a versioned tag
or PyPI package name, or derive the URL from an environment variable like
MAT3RA_UTILS_WHEEL or from a build-time variable) and update any callers such as
await install_packages("specific_examples|made") to rely on that stable name so
the notebook no longer depends on a commit-specific dev wheel URL.
src/py/mat3ra/notebooks_utils/material.py (3)

146-148: ⚡ Quick win

Bare exception catch can hide bugs.

Same issue as Line 39–43: catching all exceptions without specificity or logging makes failures silent.

🛡️ Proposed improvement
             try:
                 resulting_material = MaterialWithBuildMetadata.create(data)
-            except Exception:
+            except (KeyError, ValueError, TypeError) as e:
+                log(f"Failed to create MaterialWithBuildMetadata: {e}, falling back to Material", SeverityLevelEnum.WARNING)
                 resulting_material = Material.create(data)
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/py/mat3ra/notebooks_utils/material.py` around lines 146 - 148, The
current bare except around MaterialWithBuildMetadata.create hides errors; change
it to catch only expected exceptions or at minimum capture the exception object
and log it before falling back to Material.create. Replace the broad "except
Exception:" with either a specific exception type you expect from
MaterialWithBuildMetadata.create (e.g., ValueError/TypeError) or "except
Exception as e:" and call the module/class logger (or process logger) to record
e and a contextual message, then call Material.create(data) as the fallback;
reference MaterialWithBuildMetadata.create and Material.create to locate the
code.

39-43: ⚡ Quick win

Bare exception catch can hide bugs.

Catching Exception without logging or re-raising specific cases makes debugging difficult when MaterialWithBuildMetadata.create fails for unexpected reasons (e.g., schema changes, corrupt data).

🛡️ Proposed improvement
         for item in globals_dict["materials_in"]:
             try:
                 materials.append(MaterialWithBuildMetadata.create(item))
-            except Exception:
+            except (KeyError, ValueError, TypeError) as e:
+                log(f"Failed to create MaterialWithBuildMetadata: {e}, falling back to Material", SeverityLevelEnum.WARNING)
                 materials.append(Material.create(item))
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/py/mat3ra/notebooks_utils/material.py` around lines 39 - 43, Replace the
bare except in the loop that calls MaterialWithBuildMetadata.create so failures
aren't silently swallowed: catch and handle only the expected error types (e.g.,
ValueError, ValidationError) or catch Exception but log the exception details
before falling back to Material.create; refer to the loop using
globals_dict["materials_in"], the MaterialWithBuildMetadata.create call, and the
fallback Material.create so you add structured logging (including the exception
message/trace) and avoid masking unexpected bugs.
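The fallback-with-logging pattern, abstracted into a generic helper (names are illustrative, not from the module):

```python
import logging

logger = logging.getLogger(__name__)

def create_with_fallback(data, primary, fallback):
    # Try the metadata-aware constructor first; on an expected failure,
    # log the reason instead of swallowing it, then fall back.
    try:
        return primary(data)
    except (KeyError, ValueError, TypeError) as exc:
        logger.warning("Primary constructor failed (%s); using fallback.", exc)
        return fallback(data)
```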

37-37: 💤 Low value

Consider using dict.get() for cleaner code.

The if "materials_in" in globals_dict and globals_dict["materials_in"] check can be simplified to use .get() with a truthiness check.

♻️ Proposed refactor
-    if "materials_in" in globals_dict and globals_dict["materials_in"]:
+    if globals_dict.get("materials_in"):
         materials = []
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/py/mat3ra/notebooks_utils/material.py` at line 37, Replace the verbose
presence-and-truthiness check with a .get() call: change the conditional that
currently reads checking "materials_in" in globals_dict and then indexing
globals_dict["materials_in"] to a single truthiness check using
globals_dict.get("materials_in"); update any related branches that rely on that
condition to use the same symbol (globals_dict and the "materials_in" key) to
keep behavior identical.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@other/materials_designer/workflows/surface_energy.ipynb`:
- Around line 513-515: Update the import for wait_for_jobs_to_finish_async to
the correct module path: replace the broken import "from
mat3ra.notebooks_utils.other.api import create_job,
wait_for_jobs_to_finish_async" so that wait_for_jobs_to_finish_async is imported
from mat3ra.notebooks_utils.python.other.api (while keeping create_job import
location intact if unchanged); ensure the import line references
wait_for_jobs_to_finish_async by name and that other imports (dict_to_namespace,
display_JSON) remain untouched.

In `@src/py/mat3ra/notebooks_utils/environment.py`:
- Around line 1-10: Update the import in jupyterlite/interrupts.py to use a
relative import instead of the absolute path that references
src.py.mat3ra.notebooks_utils.environment; locate the import statement that
pulls ENVIRONMENT and EnvironmentsEnum (currently using "from
src.py.mat3ra.notebooks_utils.environment import ...") and change it to a
relative import "from ..environment import ENVIRONMENT, EnvironmentsEnum" so the
module is only discovered under a single name by mypy.

In `@src/py/mat3ra/notebooks_utils/jupyterlite/packages.py`:
- Around line 114-116: The list comprehension building
matching_notebook_requirements_list can pass None into re.search because
cfg.get("name") may be missing; update the comprehension to skip entries where
cfg.get("name") is falsy or not a string before calling re.search (i.e., only
call re.search(cfg.get("name"), notebook_name_pattern) when cfg.get("name") is a
non-empty string). Ensure you reference requirements_dict and
notebook_name_pattern and guard the cfg.get("name") value (from each cfg) so
TypeError is avoided.
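The guarded comprehension can be sketched as follows; the requirements shape is assumed from the comment above:

```python
import re

def matching_requirements(requirements, notebook_name):
    # Skip entries whose "name" is missing, None, or empty so that
    # re.search never receives None as a pattern.
    return [
        cfg
        for cfg in requirements
        if isinstance(cfg.get("name"), str)
        and cfg["name"]
        and re.search(cfg["name"], notebook_name)
    ]
```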

In `@src/py/mat3ra/notebooks_utils/jupyterlite/ui.py`:
- Around line 31-55: In select_coordination_threshold_emscripten, the else
branch after validating the parsed int contains a break which exits the input
loop and returns the default instead of allowing a retry; remove that break so
the loop continues prompting until a valid coordination number is entered (keep
the ValueError handling as-is so it prints the message and loops), and ensure
coordination_threshold is only set when value is in coordination_numbers before
breaking the loop.
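The retry behavior the prompt asks for, modeled with an iterator of entries in place of interactive input() (names and the default are illustrative):

```python
def prompt_until_valid(entries, valid_values, default):
    # Keep consuming entries until one parses as an int in valid_values;
    # an invalid number no longer breaks out of the loop early.
    for raw in entries:
        try:
            value = int(raw)
        except ValueError:
            continue  # not a number: re-prompt
        if value in valid_values:
            return value
        # valid int but not an allowed value: keep prompting
    return default  # entries exhausted
```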

In `@src/py/mat3ra/notebooks_utils/material.py`:
- Around line 108-111: The try/except around building materials causes an
all-or-nothing fallback: if any item in data_from_host raises in
MaterialWithBuildMetadata.create you replace all entries with plain
Material.create losing metadata; instead iterate items and individually try
MaterialWithBuildMetadata.create for each item, falling back to Material.create
only for the failing item so successful MaterialWithBuildMetadata creations keep
their metadata (update the comprehension that builds materials from
data_from_host to handle per-item exceptions referencing
MaterialWithBuildMetadata.create, Material.create and the materials variable).
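The per-item fallback described above can be sketched as follows — a hedged illustration where `create_with_metadata` and `create_plain` stand in for `MaterialWithBuildMetadata.create` and `Material.create` (the real constructors live in material.py):

```python
def build_materials(data_from_host, create_with_metadata, create_plain):
    # Try the metadata-aware constructor per item; fall back only for the
    # items that fail, so successful items keep their build metadata.
    materials = []
    for item in data_from_host:
        try:
            materials.append(create_with_metadata(item))
        except Exception:
            materials.append(create_plain(item))
    return materials
```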

In `@src/py/mat3ra/notebooks_utils/python/other/api/auth.py`:
- Around line 87-94: The token request data dictionary for the device-code grant
includes non-standard keys; update the data payload used when calling the token
endpoint so it only contains the RFC‑8628 required fields "grant_type",
"device_code", and "client_id" (remove "redirect_uris", "response_types", and
"token_endpoint_auth_method"); locate the dict assigned to data (the block
referencing device_code and client_id) and strip out those extra keys so the
POST to the token endpoint is RFC-compliant.
- Around line 76-100: The polling function _poll_for_token_data currently calls
the blocking requests.post directly; change it to run the blocking HTTP call in
a thread via asyncio.to_thread (e.g., await asyncio.to_thread(lambda:
requests.post(...))) so the event loop isn't blocked, and likewise update the
call to request_device_flow_state inside authenticate_oidc to be invoked with
asyncio.to_thread (await asyncio.to_thread(request_device_flow_state, ...)) so
both blocking network calls are offloaded to the threadpool; keep the existing
parameters, return behavior, and error/timeout handling intact.
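Both fixes can be sketched together — the payload keeps only the RFC 8628 fields, and the blocking call is offloaded with `asyncio.to_thread`. The `grant_type` URN is the value RFC 8628 specifies; `post_fn` is a hypothetical stand-in for `requests.post`:

```python
import asyncio

def build_token_request_data(device_code: str, client_id: str) -> dict:
    # Only the three fields RFC 8628 requires for the device-code token poll.
    return {
        "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
        "device_code": device_code,
        "client_id": client_id,
    }

async def poll_token_endpoint_once(post_fn, token_url: str, data: dict):
    # Run the blocking HTTP call (e.g. requests.post) in a worker thread so
    # the running event loop keeps servicing other tasks while polling.
    return await asyncio.to_thread(post_fn, token_url, data)
```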

In `@src/py/mat3ra/notebooks_utils/python/other/api/job.py`:
- Line 8: Replace the absolute path import of interruptible_polling_loop
(currently "from src.py.mat3ra.notebooks_utils.jupyterlite.interrupts import
interruptible_polling_loop") with a package-relative import so it works when
installed (e.g. use a relative import from job.py up to the notebooks_utils
package and import jupyterlite.interrupts); update the import line in job.py to
use the relative form (from ...jupyterlite.interrupts import
interruptible_polling_loop) so the symbol interruptible_polling_loop is resolved
from the installed package instead of the src/ tree.

In `@src/py/mat3ra/notebooks_utils/python/other/api/material.py`:
- Around line 39-43: The bulk-resolution fallback currently always logs “exact”
even when it falls back to the first result; update the logic around
bulk_material_response (the generator that picks item by owner_id from matches
or falls back to matches[0]) to detect whether an owner match was found (e.g.,
store the generator result in a variable like owner_match before applying the
fallback) and then log a message conditionally: one message when owner_match is
found (exact match) and a different message when fallback is used (fallback to
first result). Apply the same change to the duplicate block around lines 50
where the same selection pattern appears, referencing matches and owner_id.

In `@src/py/mat3ra/notebooks_utils/python/other/plot/_plotly.py`:
- Around line 39-47: Before iterating for go.Scatter, validate that x_values,
y_values and hover_texts have the same length and that trace_names (if provided)
is either the same length or None; if they mismatch, raise a clear ValueError
mentioning the expected length and which input is wrong. In _plotly.py locate
the loop that builds traces (uses x_values, y_values, hover_texts, trace_names
and go.Scatter) and add a pre-check that computes expected_len = len(x_values)
and compares it to len(y_values) and len(hover_texts), and if trace_names is not
empty also compare len(trace_names); on mismatch raise an informative exception
instead of letting an IndexError occur during indexed access.
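A minimal pre-check along the lines described above might look like this (names mirror the variables in `_plotly.py`, but the exact call site is an assumption):

```python
def validate_trace_inputs(x_values, y_values, hover_texts, trace_names=None):
    # Fail fast with a named ValueError instead of an IndexError
    # surfacing mid-iteration inside the go.Scatter loop.
    expected_len = len(x_values)
    for name, seq in (("y_values", y_values), ("hover_texts", hover_texts)):
        if len(seq) != expected_len:
            raise ValueError(f"{name} has length {len(seq)}; expected {expected_len} to match x_values")
    if trace_names and len(trace_names) != expected_len:
        raise ValueError(f"trace_names has length {len(trace_names)}; expected {expected_len} or None")
```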

In `@src/py/mat3ra/notebooks_utils/python/other/ui.py`:
- Around line 19-29: The js_code currently injects content_str and filename
directly into a JS template literal, allowing backticks/`${`/newlines to break
out and enable XSS; update the generation to safely serialize/escape those
Python values (e.g. replace embedding with JSON-safe serialization) so that
js_code uses a safe JS literal like assigning content and filename via
JSON.stringify/serialized Python json.dumps of content_str and filename instead
of raw backticks; update the block that builds js_code (variable js_code) to
reference the sanitized/serialized variables and remove unescaped backtick
interpolation to prevent injection.
- Line 6: The type annotation for download_content_to_file is incorrect: update
the signature to accept the actual types handled (e.g., Union[dict, str, bytes]
or Any) instead of dict so callers and type checkers reflect runtime behavior;
import typing (from typing import Union, Any) and change def
download_content_to_file(content: Union[dict, str, bytes], filename: str) ->
None (or use Any if broader) to match the branches that handle non-dict content
in the function.
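The escaping fix for the first item can be sketched with `json.dumps`, which emits a double-quoted JS string literal so backticks, `${...}` sequences, and raw newlines in the Python values cannot break out (the surrounding `var` statements are illustrative, not the module's actual template):

```python
import json

def build_download_js(content_str: str, filename: str) -> str:
    # json.dumps produces a valid JS string literal; inside a double-quoted
    # string, backticks and ${ } are inert and newlines are escaped as \n.
    content_js = json.dumps(content_str)
    filename_js = json.dumps(filename)
    return f"var content = {content_js};\nvar filename = {filename_js};"
```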

In `@src/py/mat3ra/notebooks_utils/settings.py`:
- Line 11: The module's __all__ currently lists only imported names; add the
locally defined names use_interactive_JSON_viewer and UPLOADS_FOLDER to the
__all__ list so they are exported by from settings import *; update the __all__
symbol (in the module where __all__ is defined) to include
"use_interactive_JSON_viewer" and "UPLOADS_FOLDER" alongside "ACCOUNT_ID",
"AUTH_TOKEN", "MATERIALS_PROJECT_API_KEY", and "ORGANIZATION_ID".

In `@src/py/mat3ra/notebooks_utils/ui.py`:
- Line 3: Replace the hard-coded absolute import that references the repository
layout with a package-relative import: change the current import of
select_coordination_threshold_python (the line importing from
src.py.mat3ra.notebooks_utils.python.core.ui) to a relative import within the
package (e.g., from .python.core.ui import select_coordination_threshold_python)
so the module works when the package is installed via pip.

---

Duplicate comments:
In `@src/py/mat3ra/notebooks_utils/jupyterlite/interrupts.py`:
- Around line 164-169: The loop may make remaining_seconds slightly negative
because it always subtracts check_interval_seconds; change the logic in the loop
inside wait for interrupt so you compute sleep_duration =
min(check_interval_seconds, remaining_seconds) (or subtract by that same min)
and then await asyncio.sleep(sleep_duration) and decrement remaining_seconds by
sleep_duration; keep the existing abort check against
broadcast_channel_abort_controller.is_aborted and raise UserAbortError as before
so the loop never produces a negative remaining_seconds value.
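A self-contained sketch of that countdown logic (the `UserAbortError` class here is a stand-in for the one defined in `jupyterlite/interrupts.py`):

```python
import asyncio

class UserAbortError(Exception):
    # Stand-in for the real UserAbortError in jupyterlite/interrupts.py.
    pass

async def wait_with_countdown(total_seconds, check_interval_seconds, is_aborted):
    remaining_seconds = total_seconds
    while remaining_seconds > 0:
        if is_aborted():
            raise UserAbortError("Execution aborted by user")
        # Never sleep past the deadline, so remaining_seconds stays >= 0.
        sleep_duration = min(check_interval_seconds, remaining_seconds)
        await asyncio.sleep(sleep_duration)
        remaining_seconds -= sleep_duration
    return remaining_seconds
```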

In `@src/py/mat3ra/notebooks_utils/jupyterlite/url.py`:
- Around line 48-65: The read_from_url function has an incomplete/incorrect
docstring and can return None if ENVIRONMENT is not handled; update the
docstring to fully describe parameters and return type as Union[str, bytes], and
ensure the function always returns a Union[str, bytes] by adding an explicit
fallback branch that raises a clear exception (or returns a safe default) when
ENVIRONMENT is neither EnvironmentsEnum.PYODIDE nor EnvironmentsEnum.PYTHON;
reference the function name read_from_url and the helpers read_from_url_pyodide,
read_from_url_python and the EnvironmentsEnum values when implementing the fix.
- Around line 7-24: The read_from_url_python function currently calls
urllib.request.urlopen without validating the URL scheme, without a timeout, and
without error handling; update read_from_url_python to first parse and validate
the URL scheme (only allow "http" and "https"), call urlopen with a sensible
timeout (e.g., pass a timeout parameter) when opening the URL, and wrap the
network call in try/except to catch urllib.error.URLError, HTTPError, ValueError
(from invalid URL parsing) and socket.timeout, logging or re-raising a clear
exception; ensure the function still returns bytes when as_bytes is True and
decoded text otherwise, and reference read_from_url_python, the urlopen call,
and the body/response variables when locating the change.
- Around line 27-45: The read_from_url_pyodide function's docstring return type
is wrong and it lacks fetch error handling; update the docstring to state
"Union[str, bytes]" as the return type and add network/error handling around the
pyfetch call in read_from_url_pyodide: after awaiting pyfetch(url) check
response.ok (or equivalent status) and raise a descriptive exception (include
response.status and url) on non-OK responses, and wrap the pyfetch/response
reading in a try/except to catch and re-raise any fetch/IO exceptions with
context so callers get useful error messages.
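The hardened plain-Python branch could look roughly like this — a sketch, not the module's actual implementation, with an assumed 30-second default timeout:

```python
import socket
import urllib.error
import urllib.request
from urllib.parse import urlparse

def read_from_url_python(url: str, as_bytes: bool = False, timeout: float = 30.0):
    # Validate the scheme before touching the network, bound the request
    # with a timeout, and surface network failures with context.
    scheme = urlparse(url).scheme
    if scheme not in ("http", "https"):
        raise ValueError(f"Unsupported URL scheme {scheme!r}; only http and https are allowed")
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            body = response.read()
    except (urllib.error.URLError, socket.timeout) as exc:
        # HTTPError subclasses URLError, so HTTP failures land here too.
        raise RuntimeError(f"Failed to read {url}: {exc}") from exc
    return body if as_bytes else body.decode("utf-8")
```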

In `@src/py/mat3ra/notebooks_utils/logger.py`:
- Line 27: The current fallback treats the VERBOSE env var as a truthy string;
change the should_log assignment so the environment value is parsed explicitly
as a boolean before falling back to caller_globals: replace the
caller_globals.get("VERBOSE", os.environ.get("VERBOSE", True)) usage with a
boolean parse of os.environ.get("VERBOSE", "True") (e.g., use a small helper or
distutils.util.strtobool and map to bool, or test lower() in
("1","true","yes","y")) and then call caller_globals.get("VERBOSE",
parsed_bool); update imports or add the helper function and ensure the variable
named should_log uses this parsed boolean.
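A small helper of the kind described (the accepted truthy strings are one reasonable choice, not a fixed spec):

```python
import os

def parse_bool_env(name: str, default: str = "True") -> bool:
    # Explicit boolean parse: "False", "0", "no" etc. must not be
    # treated as truthy merely because they are non-empty strings.
    return os.environ.get(name, default).strip().lower() in ("1", "true", "yes", "y", "on")
```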

In `@src/py/mat3ra/notebooks_utils/python/core/json.py`:
- Around line 40-47: The code currently interpolates raw json_str into an
executable <script>, allowing a payload like </script> to break out; instead,
emit the JSON into a non-executable container and read it from DOM before
calling renderjson: create a separate element (for example a <script
type="application/json" id="{id}_data"> or a <div hidden id="{id}_data">) that
contains json_str as textContent, then in the existing script code call
JSON.parse(document.getElementById("{id}_data").textContent) and pass that
resulting object to renderjson (use the existing id and renderjson references),
removing any direct interpolation of json_str into the executable script.
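A sketch of the non-executable-container pattern; note that `</` still needs escaping so a `</script>` payload cannot terminate even a data-only script element early (the `renderjson` call mirrors the library the module already uses, but the exact markup is an assumption):

```python
import json

def render_json_html(element_id: str, data) -> str:
    # Emit JSON into a non-executable script element and parse it from the
    # DOM; escape "</" so "</script>" inside the data cannot close the tag.
    json_str = json.dumps(data).replace("</", "<\\/")
    return (
        f'<script type="application/json" id="{element_id}_data">{json_str}</script>\n'
        f'<div id="{element_id}"></div>\n'
        f'<script>renderjson(JSON.parse('
        f'document.getElementById("{element_id}_data").textContent));</script>'
    )
```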

In `@src/py/mat3ra/notebooks_utils/python/other/api/job.py`:
- Around line 26-50: The polling predicate in wait_for_jobs_to_finish_async
incorrectly treats an empty job_ids list as "continue" because it relies on
statuses later; add an early short-circuit: if not job_ids: return False at the
top of wait_for_jobs_to_finish_async so the function stops immediately when
job_ids is empty (use the existing job_ids parameter and return a boolean
consistent with the predicate contract).

In `@src/py/mat3ra/notebooks_utils/python/other/plot/_plotly.py`:
- Around line 146-149: The marker z is being set to the global minimum (z_opt =
np.min(z_matrix)) instead of the z at the selected (x_opt, y_opt); replace that
line with logic that finds the corresponding z value in the grid used to build
z_matrix (e.g., locate the nearest indices by comparing x_opt/y_opt to the x and
y coordinate arrays used to create z_matrix — often named x_vals/x_coords and
y_vals/y_coords or produced by np.meshgrid — then set z_opt = z_matrix[y_idx,
x_idx]; if you need sub-grid accuracy, perform a 2D interpolation of z_matrix at
(x_opt, y_opt) instead). Ensure you reference the same coordinate arrays used
when z_matrix was created so the marker is placed at the correct height.
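The nearest-grid-point lookup can be sketched as follows (a pure-Python analogue of `np.argmin` over `|coords - opt|`; the coordinate-array names and the `[y_idx][x_idx]` orientation are assumptions matching a typical meshgrid layout):

```python
def z_at_point(x_coords, y_coords, z_matrix, x_opt, y_opt):
    # Find the grid indices nearest the selected point and read z there,
    # instead of placing the marker at the global minimum of z_matrix.
    x_idx = min(range(len(x_coords)), key=lambda i: abs(x_coords[i] - x_opt))
    y_idx = min(range(len(y_coords)), key=lambda i: abs(y_coords[i] - y_opt))
    return z_matrix[y_idx][x_idx]
```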

---

Nitpick comments:
In
`@examples/reproducing_publications/band_structure_for_interface_bilayer_twisted_molybdenum_disulfide.ipynb`:
- Line 208: Remove the duplicate import of visualize_materials: locate the
redundant import statement "from mat3ra.notebooks_utils.other.visualize import
visualize_materials" (the one around line 208) and delete it so only the
original import (the one at line ~112) remains; ensure no other uses or aliases
rely on the removed line and run tests/notebook to confirm imports still
resolve.

In `@other/materials_designer/create_nanowire_custom_shape.ipynb`:
- Line 49: The notebook currently hardcodes the wheel URL inside the
micropip.install call (the string
'https://exabyte-io.github.io/...g1e019a2ab-py3-none-any.whl'); replace this
literal with a parameterized value and load that value at notebook generation or
runtime (e.g., a template variable like WHEEL_URL or a config/env var) or
alternatively implement a small helper that resolves a stable, version-tagged
release or queries a stable endpoint for the latest wheel URL before calling
micropip.install; update the micropip.install invocation in
create_nanowire_custom_shape.ipynb and the same pattern in passivate_slab.ipynb,
import_material_from_jarvis_db_entry.ipynb, and
interface_2d_3d_graphene_silicon_dioxide.ipynb to use the new parameter or
resolution helper (look for the micropip.install call and the hardcoded URL
string to locate changes).

In
`@other/materials_designer/specific_examples/optimization_interface_film_xy_position_graphene_nickel.ipynb`:
- Around line 36-45: The emscripten install block currently hardcodes a wheel
URL with a git-commit hash in the await micropip.install call (the string passed
to micropip.install inside the sys.platform == "emscripten" branch), which will
become stale; change that call to resolve a stable artifact (use a versioned tag
or PyPI package name, or derive the URL from an environment variable like
MAT3RA_UTILS_WHEEL or from a build-time variable) and update any callers such as
await install_packages("specific_examples|made") to rely on that stable name so
the notebook no longer depends on a commit-specific dev wheel URL.

In `@src/py/mat3ra/notebooks_utils/__init__.py`:
- Around line 17-31: The __all__ list is unsorted and triggers RUF022; reorder
the entries in the module-level __all__ to an isort-style alphabetical order
(e.g., alphabetize the exported names such as ACCOUNT_ID, AUTH_TOKEN,
ENDPOINT_ARGS, HOST, MATERIALS_PROJECT_API_KEY, NODE names like PORT/SECURE,
etc.) so that the __all__ list is consistently sorted; locate the __all__
definition in this module and replace its current order with the alphabetically
sorted list of the exported symbols.

In `@src/py/mat3ra/notebooks_utils/jupyterlite/ui.py`:
- Around line 6-28: In ui_prompt_select_array_element_by_index_pyodide, after
assigning result = array[index] you should use that variable in the print
statement for consistency and to avoid redundant indexing; replace
print(f"Selected {element_name}: ", array[index]) with print(f"Selected
{element_name}: ", result) so the function prints the already-computed result
variable instead of re-indexing the array.

In `@src/py/mat3ra/notebooks_utils/material.py`:
- Around line 146-148: The current bare except around
MaterialWithBuildMetadata.create hides errors; change it to catch only expected
exceptions or at minimum capture the exception object and log it before falling
back to Material.create. Replace the broad "except Exception:" with either a
specific exception type you expect from MaterialWithBuildMetadata.create (e.g.,
ValueError/TypeError) or "except Exception as e:" and call the module/class
logger (or process logger) to record e and a contextual message, then call
Material.create(data) as the fallback; reference
MaterialWithBuildMetadata.create and Material.create to locate the code.
- Around line 39-43: Replace the bare except in the loop that calls
MaterialWithBuildMetadata.create so failures aren't silently swallowed: catch
and handle only the expected error types (e.g., ValueError, ValidationError) or
catch Exception but log the exception details before falling back to
Material.create; refer to the loop using globals_dict["materials_in"], the
MaterialWithBuildMetadata.create call, and the fallback Material.create so you
add structured logging (including the exception message/trace) and avoid masking
unexpected bugs.
- Line 37: Replace the verbose presence-and-truthiness check with a .get() call:
change the conditional that currently reads checking "materials_in" in
globals_dict and then indexing globals_dict["materials_in"] to a single
truthiness check using globals_dict.get("materials_in"); update any related
branches that rely on that condition to use the same symbol (globals_dict and
the "materials_in" key) to keep behavior identical.

In `@src/py/mat3ra/notebooks_utils/python/io.py`:
- Around line 35-50: The set_data_python function assumes 'value' is iterable of
dicts with a "name" key and lacks validation; add input checks at the start of
set_data_python to ensure value is an iterable (not a string) and each item is a
mapping that contains a non-empty "name" string, and handle invalid items by
either skipping with a warning via log(...) or raising a clear ValueError; when
iterating, wrap the per-item logic that constructs safe_name and writes to
UPLOADS_FOLDER in a try/except to catch TypeError/KeyError/IOError and log the
offending item and error so errors are actionable.
- Around line 9-33: The function get_data_python lacks a return type and mixes
logging functions; update its signature to include a proper return annotation
(e.g., -> Optional[List[Any]] or -> Optional[List[Dict[str, Any]]]) and import
any required typing names, replace the print("No data found...") in the except
FileNotFoundError block with the same logger used elsewhere (log(...)) and
ensure the except returns None explicitly; keep the rest of the logic and the
use of globals_dict[key] intact.
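A validation pre-check of the kind suggested for set_data_python might look like this (raising rather than skipping; whether to skip-with-warning instead is a design choice left to the module):

```python
from collections.abc import Mapping

def validate_materials_payload(value):
    # Require an iterable (not a string) of mappings, each carrying a
    # non-empty "name" string, before any files are written.
    if isinstance(value, (str, bytes)) or not hasattr(value, "__iter__"):
        raise ValueError("value must be an iterable of material dicts, not a string or scalar")
    checked = []
    for index, item in enumerate(value):
        name = item.get("name") if isinstance(item, Mapping) else None
        if not isinstance(name, str) or not name:
            raise ValueError(f"item {index} must be a mapping with a non-empty 'name' string")
        checked.append(item)
    return checked
```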

In `@src/py/mat3ra/notebooks_utils/python/other/plot/_matplotlib.py`:
- Around line 47-83: In plot_series, validate that each item in the series is a
mapping containing both x_key and y_key before building x_labels/y_values;
iterate over series (use the existing parameter names series, x_key, y_key) and
if an item is not a dict/mapping or is missing a key raise a clear ValueError
identifying the missing key and the item index (or normalize/filter with a
logged skip), so that lines building x_labels = [...] and y_values = [...]
cannot raise an unhelpful KeyError.

In `@src/py/mat3ra/notebooks_utils/ui.py`:
- Around line 10-25: The module defines an exported public async function
select_coordination_threshold (which delegates to
select_coordination_threshold_emscripten or
select_coordination_threshold_python) but __all__ only lists
"dataframe_to_html"; add "select_coordination_threshold" to the __all__ list so
the function is explicitly exported as part of the public API, e.g., update the
__all__ assignment to include "select_coordination_threshold" alongside
"dataframe_to_html".

Comment on lines +1 to +10
import os

from .enums import EnvironmentsEnum

# default value for env.HOME from https://pyodide.org/en/stable/usage/api/js-api.html
ENVIRONMENT = EnvironmentsEnum.PYODIDE if os.environ.get("HOME") == "/home/pyodide" else EnvironmentsEnum.PYTHON


def is_pyodide_environment() -> bool:
    return ENVIRONMENT == EnvironmentsEnum.PYODIDE

⚠️ Potential issue | 🔴 Critical | ⚡ Quick win

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Inspect mypy/package discovery config and package markers.
fd -HI '^__init__\.py$' src
fd -HI 'pyproject.toml|mypy.ini|setup.cfg|.pre-commit-config.yaml'
rg -n -C3 --hidden 'mypy|explicit_package_bases|mypy_path|python_version|files|packages'

Repository: Exabyte-io/api-examples

Length of output: 50380


🏁 Script executed:

cat -n pyproject.toml | grep -A 30 "tool.mypy"

Repository: Exabyte-io/api-examples

Length of output: 158


🏁 Script executed:

# Check for any CI configuration files that might show mypy errors
fd -HI '.github|.gitlab|Jenkinsfile|tox.ini' -type f
head -100 .github/workflows/*.yml 2>/dev/null || echo "No GitHub workflows found"

Repository: Exabyte-io/api-examples

Length of output: 6740


🏁 Script executed:

# Search for any mypy.ini or setup.cfg with mypy config
find . -maxdepth 2 -type f \( -name "mypy.ini" -o -name ".mypy.ini" \) -exec cat {} \;

Repository: Exabyte-io/api-examples

Length of output: 49


🏁 Script executed:

cat -n .pre-commit-config.yaml | head -100

Repository: Exabyte-io/api-examples

Length of output: 800


🏁 Script executed:

# Check if mypy is referenced in pre-commit config
rg -n 'mypy' .pre-commit-config.yaml -A 5 -B 2

Repository: Exabyte-io/api-examples

Length of output: 318


🏁 Script executed:

# Check if there are any actual error logs or CI outputs showing the duplicate module issue
git log --oneline --all | head -20

Repository: Exabyte-io/api-examples

Length of output: 710


🏁 Script executed:

# Search for imports of environment module to see how it's being imported
rg -n "from.*environment|import.*environment" src/py

Repository: Exabyte-io/api-examples

Length of output: 1916


🏁 Script executed:

# Check if there are any __init__.py issues or if modules are being exposed
cat -n src/py/mat3ra/__init__.py
cat -n src/py/mat3ra/notebooks_utils/__init__.py

Repository: Exabyte-io/api-examples

Length of output: 916


🏁 Script executed:

# Verify the package structure is correct
tree -L 4 src/py/mat3ra/ 2>/dev/null || find src/py/mat3ra -type f -name "*.py" | head -30

Repository: Exabyte-io/api-examples

Length of output: 1011


Convert absolute import to relative import in interrupts.py to resolve mypy duplicate module discovery.

src/py/mat3ra/notebooks_utils/jupyterlite/interrupts.py uses an absolute import path (from src.py.mat3ra.notebooks_utils.environment import ...) that causes mypy to discover the environment module under two different names, blocking the merge. Change this to a relative import to align with all other imports in the codebase:

from ..environment import ENVIRONMENT, EnvironmentsEnum

This resolves the duplicate module discovery without requiring changes to mypy configuration.

🧰 Tools
🪛 GitHub Actions: Continuous Testing and Docs Publication / 1_run-linter (3.10.13).txt

[error] 1-1: mypy failed (exit code 2). Source file found twice under different module names: "mat3ra.notebooks_utils.environment" and "src.py.mat3ra.notebooks_utils.environment".

🪛 GitHub Actions: Continuous Testing and Docs Publication / run-linter (3.10.13)

[error] 1-1: mypy failed (hook id: mypy, exit code: 2). Source file found twice under different module names: "mat3ra.notebooks_utils.environment" and "src.py.mat3ra.notebooks_utils.environment".


[error] 1-1: mypy resolution hint: consider adding __init__.py somewhere, using --explicit-package-bases, or adjusting MYPYPATH (errors prevented further checking).

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/py/mat3ra/notebooks_utils/environment.py` around lines 1 - 10, Update the
import in jupyterlite/interrupts.py to use a relative import instead of the
absolute path that references src.py.mat3ra.notebooks_utils.environment; locate
the import statement that pulls ENVIRONMENT and EnvironmentsEnum (currently
using "from src.py.mat3ra.notebooks_utils.environment import ...") and change it
to a relative import "from ..environment import ENVIRONMENT, EnvironmentsEnum"
so the module is only discovered under a single name by mypy.

Comment on lines +114 to +116
    matching_notebook_requirements_list = [
        cfg for cfg in requirements_dict.get("notebooks", []) if re.search(cfg.get("name"), notebook_name_pattern)
    ]

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

🧩 Analysis chain

🏁 Script executed:

# First, let's read the actual code at the specified location
cat -n src/py/mat3ra/notebooks_utils/jupyterlite/packages.py | sed -n '110,120p'

Repository: Exabyte-io/api-examples

Length of output: 599


🏁 Script executed:

# Get more context around the function
cat -n src/py/mat3ra/notebooks_utils/jupyterlite/packages.py | sed -n '100,130p'

Repository: Exabyte-io/api-examples

Length of output: 1687


🏁 Script executed:

# Search for where requirements_dict is created/populated to understand the data structure
rg "requirements_dict" src/py/mat3ra/notebooks_utils/jupyterlite/packages.py -B 5 -A 5

Repository: Exabyte-io/api-examples

Length of output: 1540


🏁 Script executed:

# Check if there are test cases or examples of how the config is structured
fd test -t f -e py | xargs rg "notebooks_utils" 2>/dev/null | head -20

Repository: Exabyte-io/api-examples

Length of output: 172


🏁 Script executed:

# Look for test files or config examples
find . -name "*.yml" -o -name "*.yaml" | head -20

Repository: Exabyte-io/api-examples

Length of output: 256


🏁 Script executed:

# Search for test files related to packages.py or jupyterlite
fd test -t f -name "*.py" | xargs grep -l "get_packages_list\|notebooks_utils" 2>/dev/null | head -10

Repository: Exabyte-io/api-examples

Length of output: 298


🏁 Script executed:

# Check if there are any example config files
fd -e yml -e yaml | xargs grep -l "notebooks:" 2>/dev/null | head -5

Repository: Exabyte-io/api-examples

Length of output: 78


🏁 Script executed:

# Let's verify Python re.search behavior with None
python3 << 'EOF'
import re
try:
    result = re.search(None, "test_string")
    print("No error - result:", result)
except TypeError as e:
    print("TypeError raised:", e)
EOF

Repository: Exabyte-io/api-examples

Length of output: 135


🏁 Script executed:

# Check the config.yml to see the structure of the "notebooks" section
cat config.yml

Repository: Exabyte-io/api-examples

Length of output: 3516


🏁 Script executed:

# Look for any tests that might test get_packages_list function
find . -type f -name "*.py" -path "*/test*" | xargs grep -l "packages_list\|get_packages" 2>/dev/null

Repository: Exabyte-io/api-examples

Length of output: 51


🏁 Script executed:

# Search for any usage of get_packages_list to see how it's called
rg "get_packages_list" --type py -B 2 -A 2

Repository: Exabyte-io/api-examples

Length of output: 981


🏁 Script executed:

# Let's check if there are any test files that might provide malformed config examples
find . -type f \( -name "*.py" -path "*/test*" \) | xargs grep -l "notebooks_utils\|get_packages" 2>/dev/null

Repository: Exabyte-io/api-examples

Length of output: 114


🏁 Script executed:

# Check for any data files used in tests
find . -path "*/test*" \( -name "*.yml" -o -name "*.yaml" -o -name "*.json" \) 2>/dev/null | head -10

Repository: Exabyte-io/api-examples

Length of output: 49


🏁 Script executed:

# Let's look at where requirements_dict is loaded to see if there's any validation
rg "yaml.safe_load" src/py/mat3ra/notebooks_utils/jupyterlite/packages.py -B 5 -A 10

Repository: Exabyte-io/api-examples

Length of output: 422


🏁 Script executed:

# Check if there's any schema validation for the config
rg "requirements_dict\|config" src/py/mat3ra/notebooks_utils/jupyterlite/packages.py | head -30

Repository: Exabyte-io/api-examples

Length of output: 49


🏁 Script executed:

# Let's see the full function to understand the context better
cat -n src/py/mat3ra/notebooks_utils/jupyterlite/packages.py | sed -n '90,160p'

Repository: Exabyte-io/api-examples

Length of output: 3567


Add defensive check to skip entries without valid notebook name before re.search.

cfg.get("name") can return None (if the field is missing, null, or empty in the config), causing re.search() to raise TypeError at runtime. The current code lacks validation despite accepting arbitrary dictionary input.

Proposed fix
-    matching_notebook_requirements_list = [
-        cfg for cfg in requirements_dict.get("notebooks", []) if re.search(cfg.get("name"), notebook_name_pattern)
-    ]
+    matching_notebook_requirements_list = []
+    for cfg in requirements_dict.get("notebooks", []):
+        pattern = cfg.get("name")
+        if not isinstance(pattern, str) or not pattern:
+            continue
+        if re.search(pattern, notebook_name_pattern):
+            matching_notebook_requirements_list.append(cfg)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
Before:
    matching_notebook_requirements_list = [
        cfg for cfg in requirements_dict.get("notebooks", []) if re.search(cfg.get("name"), notebook_name_pattern)
    ]
After:
    matching_notebook_requirements_list = []
    for cfg in requirements_dict.get("notebooks", []):
        pattern = cfg.get("name")
        if not isinstance(pattern, str) or not pattern:
            continue
        if re.search(pattern, notebook_name_pattern):
            matching_notebook_requirements_list.append(cfg)
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/py/mat3ra/notebooks_utils/jupyterlite/packages.py` around lines 114 -
116, The list comprehension building matching_notebook_requirements_list can
pass None into re.search because cfg.get("name") may be missing; update the
comprehension to skip entries where cfg.get("name") is falsy or not a string
before calling re.search (i.e., only call re.search(cfg.get("name"),
notebook_name_pattern) when cfg.get("name") is a non-empty string). Ensure you
reference requirements_dict and notebook_name_pattern and guard the
cfg.get("name") value (from each cfg) so TypeError is avoided.

Comment on lines +31 to +55
async def select_coordination_threshold_emscripten(distribution: Dict[int, int], default_threshold: int) -> int:
    """
    Select the coordination threshold from the given distribution. Works in Pyodide environment.
    Args:
        distribution: The distribution of coordination numbers.
        default_threshold: The default threshold value.
    Returns:
        int: The selected coordination threshold.
    """
    coordination_threshold = default_threshold
    coordination_numbers = list(distribution.keys())
    prompt_text = f"\nCoordination numbers distribution: {distribution}" f"\nEnter coordination threshold value: "
    while True:
        try:
            value_str = await input(prompt_text) # type: ignore
            value = int(value_str)
            if value in coordination_numbers:
                coordination_threshold = value
                break
            else:
                print(f"Invalid value. Please enter one of these coordination numbers: {coordination_numbers}")
                break
        except ValueError:
            print(f"Please enter a valid integer value from: {coordination_numbers}")
    return coordination_threshold

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Remove break in else clause to allow retry.

Line 52 breaks the loop after printing an invalid value message, preventing the user from retrying. The function then returns default_threshold instead of prompting again. This appears to be a logic error.

🐛 Proposed fix
             if value in coordination_numbers:
                 coordination_threshold = value
                 break
             else:
                 print(f"Invalid value. Please enter one of these coordination numbers: {coordination_numbers}")
-                break
         except ValueError:
             print(f"Please enter a valid integer value from: {coordination_numbers}")
     return coordination_threshold
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
async def select_coordination_threshold_emscripten(distribution: Dict[int, int], default_threshold: int) -> int:
    """
    Select the coordination threshold from the given distribution. Works in Pyodide environment.
    Args:
        distribution: The distribution of coordination numbers.
        default_threshold: The default threshold value.
    Returns:
        int: The selected coordination threshold.
    """
    coordination_threshold = default_threshold
    coordination_numbers = list(distribution.keys())
    prompt_text = f"\nCoordination numbers distribution: {distribution}" f"\nEnter coordination threshold value: "
    while True:
        try:
            value_str = await input(prompt_text)  # type: ignore
            value = int(value_str)
            if value in coordination_numbers:
                coordination_threshold = value
                break
            else:
                print(f"Invalid value. Please enter one of these coordination numbers: {coordination_numbers}")
                break
        except ValueError:
            print(f"Please enter a valid integer value from: {coordination_numbers}")
    return coordination_threshold
async def select_coordination_threshold_emscripten(distribution: Dict[int, int], default_threshold: int) -> int:
    """
    Select the coordination threshold from the given distribution. Works in Pyodide environment.
    Args:
        distribution: The distribution of coordination numbers.
        default_threshold: The default threshold value.
    Returns:
        int: The selected coordination threshold.
    """
    coordination_threshold = default_threshold
    coordination_numbers = list(distribution.keys())
    prompt_text = f"\nCoordination numbers distribution: {distribution}" f"\nEnter coordination threshold value: "
    while True:
        try:
            value_str = await input(prompt_text)  # type: ignore
            value = int(value_str)
            if value in coordination_numbers:
                coordination_threshold = value
                break
            else:
                print(f"Invalid value. Please enter one of these coordination numbers: {coordination_numbers}")
        except ValueError:
            print(f"Please enter a valid integer value from: {coordination_numbers}")
    return coordination_threshold
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/py/mat3ra/notebooks_utils/jupyterlite/ui.py` around lines 31 - 55, In
select_coordination_threshold_emscripten, the else branch after validating the
parsed int contains a break which exits the input loop and returns the default
instead of allowing a retry; remove that break so the loop continues prompting
until a valid coordination number is entered (keep the ValueError handling as-is
so it prints the message and loops), and ensure coordination_threshold is only
set when value is in coordination_numbers before breaking the loop.
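The corrected control flow can be sketched synchronously, with a hypothetical `read_input` callable standing in for the async Pyodide `input`:

```python
def select_threshold(distribution, default_threshold, read_input):
    """Synchronous sketch of the corrected retry loop: an invalid or
    out-of-range entry re-prompts instead of silently returning the default."""
    coordination_numbers = list(distribution.keys())
    threshold = default_threshold
    while True:
        try:
            value = int(read_input())
        except ValueError:
            print(f"Please enter a valid integer value from: {coordination_numbers}")
            continue
        if value in coordination_numbers:
            threshold = value
            break
        # No break here: fall through and prompt again.
        print(f"Invalid value. Please enter one of these coordination numbers: {coordination_numbers}")
    return threshold


# Simulated user entries: garbage, an out-of-range number, then a valid one.
answers = iter(["abc", "7", "4"])
result = select_threshold({3: 10, 4: 25}, default_threshold=3, read_input=lambda: next(answers))
```

With the stray `break` removed, only the third (valid) entry terminates the loop, so the returned threshold is the user's choice rather than the default.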

Comment thread src/py/mat3ra/notebooks_utils/material.py
Comment on lines +39 to +47
for i in range(len(x_values)):
    trace = go.Scatter(
        x=[x_values[i]],
        y=[y_values[i]],
        text=[hover_texts[i]],
        mode="markers",
        hoverinfo="text",
        name=trace_names[i] if trace_names else f"Point {i}",
    )

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Validate parallel input lengths before indexed access.

x_values, y_values, hover_texts, and trace_names can desync and raise IndexError during plotting.

Proposed fix
 def create_scatter_plot_2d(
@@
 ) -> go.Figure:
@@
+    if not (len(x_values) == len(y_values) == len(hover_texts)):
+        raise ValueError("x_values, y_values, and hover_texts must have the same length.")
+    if trace_names is not None and len(trace_names) != len(x_values):
+        raise ValueError("trace_names must have the same length as x_values.")
+
     data = []
🤖 Prompt for AI Agents

In `@src/py/mat3ra/notebooks_utils/python/other/plot/_plotly.py` around lines 39 -
47, Before iterating for go.Scatter, validate that x_values, y_values and
hover_texts have the same length and that trace_names (if provided) is either
the same length or None; if they mismatch, raise a clear ValueError mentioning
the expected length and which input is wrong. In _plotly.py locate the loop that
builds traces (uses x_values, y_values, hover_texts, trace_names and go.Scatter)
and add a pre-check that computes expected_len = len(x_values) and compares it
to len(y_values) and len(hover_texts), and if trace_names is not empty also
compare len(trace_names); on mismatch raise an informative exception instead of
letting an IndexError occur during indexed access.
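A minimal sketch of the pre-check described above, using the parameter names from the snippet; `validate_parallel_inputs` is a hypothetical helper, not part of the package:

```python
def validate_parallel_inputs(x_values, y_values, hover_texts, trace_names=None):
    """Pre-check that the parallel plot inputs line up, raising a clear
    ValueError instead of letting an IndexError surface mid-plot."""
    expected = len(x_values)
    if not (len(y_values) == len(hover_texts) == expected):
        raise ValueError(
            f"x_values, y_values, and hover_texts must have the same length; "
            f"got {expected}, {len(y_values)}, and {len(hover_texts)}."
        )
    if trace_names is not None and len(trace_names) != expected:
        raise ValueError(f"trace_names must have length {expected}, got {len(trace_names)}.")


validate_parallel_inputs([1, 2], [3, 4], ["a", "b"], ["p", "q"])  # passes silently

try:
    validate_parallel_inputs([1, 2], [3], ["a", "b"])
except ValueError as exc:
    error_message = str(exc)
```

Failing fast at the function boundary also puts the offending argument name in the traceback, which an IndexError deep inside the trace-building loop would not.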

from IPython.display import Javascript, display


def download_content_to_file(content: dict, filename: str):

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Fix type annotation to match actual behavior.

The function accepts content: dict but handles non-dict types at lines 16-17. The annotation should reflect this.

🔧 Proposed fix
+from typing import Any
+
-def download_content_to_file(content: dict, filename: str):
+def download_content_to_file(content: Any, filename: str):
     """
     Download content to a file with the given filename.

     Args:
-        content (dict): The content to download.
+        content (Any): The content to download (typically a dict).
         filename (str): The name of the file to download.
     """
🤖 Prompt for AI Agents

In `@src/py/mat3ra/notebooks_utils/python/other/ui.py` at line 6, The type
annotation for download_content_to_file is incorrect: update the signature to
accept the actual types handled (e.g., Union[dict, str, bytes] or Any) instead
of dict so callers and type checkers reflect runtime behavior; import typing
(from typing import Union, Any) and change def download_content_to_file(content:
Union[dict, str, bytes], filename: str) -> None (or use Any if broader) to match
the branches that handle non-dict content in the function.

Comment on lines +19 to +29
    js_code = f"""
    var content = `{content_str}`;
    var filename = `{filename}`;
    var blob = new Blob([content], {{ type: 'application/json' }});
    var link = document.createElement('a');
    link.href = window.URL.createObjectURL(blob);
    link.download = filename;
    document.body.appendChild(link);
    link.click();
    document.body.removeChild(link);
    """

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Escape special characters to prevent JavaScript injection.

Embedding content_str and filename directly into JavaScript template literals is unsafe. If either contains backticks, ${, or newlines, they can break out of the string context and execute arbitrary code. For example, a filename of test`); alert('XSS'); //` would inject script execution.

🛡️ Proposed fix
+    # Escape content and filename for safe JS embedding
+    content_escaped = content_str.replace("\\", "\\\\").replace("`", "\\`").replace("${", "\\${")
+    filename_escaped = filename.replace("\\", "\\\\").replace("`", "\\`").replace("${", "\\${")
+
     js_code = f"""
-    var content = `{content_str}`;
-    var filename = `{filename}`;
+    var content = `{content_escaped}`;
+    var filename = `{filename_escaped}`;
     var blob = new Blob([content], {{ type: 'application/json' }});
📝 Committable suggestion


Suggested change
    js_code = f"""
    var content = `{content_str}`;
    var filename = `{filename}`;
    var blob = new Blob([content], {{ type: 'application/json' }});
    var link = document.createElement('a');
    link.href = window.URL.createObjectURL(blob);
    link.download = filename;
    document.body.appendChild(link);
    link.click();
    document.body.removeChild(link);
    """
    # Escape content and filename for safe JS embedding
    content_escaped = content_str.replace("\\", "\\\\").replace("`", "\\`").replace("${", "\\${")
    filename_escaped = filename.replace("\\", "\\\\").replace("`", "\\`").replace("${", "\\${")
    js_code = f"""
    var content = `{content_escaped}`;
    var filename = `{filename_escaped}`;
    var blob = new Blob([content], {{ type: 'application/json' }});
    var link = document.createElement('a');
    link.href = window.URL.createObjectURL(blob);
    link.download = filename;
    document.body.appendChild(link);
    link.click();
    document.body.removeChild(link);
    """
🤖 Prompt for AI Agents

In `@src/py/mat3ra/notebooks_utils/python/other/ui.py` around lines 19 - 29, The
js_code currently injects content_str and filename directly into a JS template
literal, allowing backticks/`${`/newlines to break out and enable XSS; update
the generation to safely serialize/escape those Python values (e.g. replace
embedding with JSON-safe serialization) so that js_code uses a safe JS literal
like assigning content and filename via JSON.stringify/serialized Python
json.dumps of content_str and filename instead of raw backticks; update the
block that builds js_code (variable js_code) to reference the
sanitized/serialized variables and remove unescaped backtick interpolation to
prevent injection.
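The JSON-serialization approach the AI prompt describes can be sketched as follows; `build_js_download_snippet` is a hypothetical helper, relying on the fact that `json.dumps` emits a double-quoted, escaped string literal that is also valid JavaScript:

```python
import json


def build_js_download_snippet(content_str: str, filename: str) -> str:
    # json.dumps escapes quotes, backslashes, and newlines, and the result
    # is a double-quoted literal, so backticks and ${ cannot break out of it.
    return (
        f"var content = {json.dumps(content_str)};\n"
        f"var filename = {json.dumps(filename)};"
    )


snippet = build_js_download_snippet('{"a": 1}', "test`); alert('XSS'); //`.json")
```

The hostile filename from the review comment lands inside a quoted literal here, so it is downloaded-as-named rather than executed.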


UPLOADS_FOLDER = "uploads"

__all__ = ["ACCOUNT_ID", "AUTH_TOKEN", "MATERIALS_PROJECT_API_KEY", "ORGANIZATION_ID"]

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Add locally-defined symbols to __all__.

The __all__ export list includes only the imported credentials/IDs but omits use_interactive_JSON_viewer and UPLOADS_FOLDER, which are defined in this module. Users cannot discover these via from settings import *.

📝 Proposed fix
-__all__ = ["ACCOUNT_ID", "AUTH_TOKEN", "MATERIALS_PROJECT_API_KEY", "ORGANIZATION_ID"]
+__all__ = [
+    "ACCOUNT_ID",
+    "AUTH_TOKEN",
+    "MATERIALS_PROJECT_API_KEY",
+    "ORGANIZATION_ID",
+    "use_interactive_JSON_viewer",
+    "UPLOADS_FOLDER",
+]
📝 Committable suggestion


Suggested change
__all__ = ["ACCOUNT_ID", "AUTH_TOKEN", "MATERIALS_PROJECT_API_KEY", "ORGANIZATION_ID"]
__all__ = [
    "ACCOUNT_ID",
    "AUTH_TOKEN",
    "MATERIALS_PROJECT_API_KEY",
    "ORGANIZATION_ID",
    "use_interactive_JSON_viewer",
    "UPLOADS_FOLDER",
]
🤖 Prompt for AI Agents

In `@src/py/mat3ra/notebooks_utils/settings.py` at line 11, The module's __all__
currently lists only imported names; add the locally defined names
use_interactive_JSON_viewer and UPLOADS_FOLDER to the __all__ list so they are
exported by from settings import *; update the __all__ symbol (in the module
where __all__ is defined) to include "use_interactive_JSON_viewer" and
"UPLOADS_FOLDER" alongside "ACCOUNT_ID", "AUTH_TOKEN",
"MATERIALS_PROJECT_API_KEY", and "ORGANIZATION_ID".
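The star-import behavior behind this finding can be demonstrated with a throwaway module (the module and its names here are hypothetical, mirroring the settings.py layout):

```python
import os
import sys
import tempfile
import textwrap

# Write a demo module where UPLOADS_FOLDER is defined but omitted from __all__.
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "demo_settings.py"), "w") as f:
    f.write(textwrap.dedent("""\
        UPLOADS_FOLDER = "uploads"
        ACCOUNT_ID = "acc-123"
        __all__ = ["ACCOUNT_ID"]
    """))
sys.path.insert(0, tmpdir)

# `from demo_settings import *` binds exactly the names listed in __all__,
# so the locally defined UPLOADS_FOLDER silently disappears.
namespace = {}
exec("from demo_settings import *", namespace)
star_imported = {name for name in namespace if not name.startswith("__")}
```

Only `ACCOUNT_ID` survives the star import, which is why the review asks for the locally defined symbols to be added to `__all__`.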

Comment thread src/py/mat3ra/notebooks_utils/ui.py Outdated
@@ -0,0 +1,25 @@
from typing import Dict

from src.py.mat3ra.notebooks_utils.python.core.ui import select_coordination_threshold_python

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Fix absolute import path.

The import uses src.py.mat3ra.notebooks_utils... which includes the source directory structure. This will fail when the package is installed via pip.

Use a relative import instead:

📦 Proposed fix
-from src.py.mat3ra.notebooks_utils.python.core.ui import select_coordination_threshold_python
+from .python.core.ui import select_coordination_threshold_python
📝 Committable suggestion


Suggested change
from src.py.mat3ra.notebooks_utils.python.core.ui import select_coordination_threshold_python
from .python.core.ui import select_coordination_threshold_python
🤖 Prompt for AI Agents

In `@src/py/mat3ra/notebooks_utils/ui.py` at line 3, Replace the hard-coded
absolute import that references the repository layout with a package-relative
import: change the current import of select_coordination_threshold_python (the
line importing from src.py.mat3ra.notebooks_utils.python.core.ui) to a relative
import within the package (e.g., from .python.core.ui import
select_coordination_threshold_python) so the module works when the package is
installed via pip.

import uuid
from typing import Union

from IPython.display import HTML, display
Member

@timurbazhirov timurbazhirov May 15, 2026


We should move this into ui.py, and move the prompt-related content that is currently in ui.py into prompt.py instead.

@@ -0,0 +1,50 @@
import json
Member


Maybe inside python/ we keep code that can run even from the command line. Then what's in "other" now should become "ipython", where we require the IPython framework, not just the Python engine. And inside jupyterlite/ we keep what only JupyterLite needs.

python/ itself should not have ui, only io, json, etc. Then the code in ipython/ is reusable between JupyterLite and JupyterLab (IPython), and the code in jupyterlite/ is only for JupyterLite.

Member Author

@VsevolodX VsevolodX May 15, 2026


what about:

 python/
 -- core/
 -- ipython/ (or jl/)

core -- is the pure python

?
