
[numba.md] Update np.random → Generator API #550

Open

Chihiro2000GitHub wants to merge 1 commit into main from update-rng-numba

Conversation

@Chihiro2000GitHub
Collaborator

Summary

This PR migrates legacy NumPy random API usage in numba.md as part of QuantEcon/meta#299.

Details

The Numba/JIT-related classification and changes in this PR follow the guidance at https://manual.quantecon.org/styleguide/code.html#numpy-random-number-generation. I would be grateful if you could refer to the Numba section of this page when reviewing.

Case A (update(), main text, @jit): Left unchanged. Although update() is @jit-decorated, it is called from a prange loop in compute_long_run_median_parallel, making it unsafe to pass a shared Generator. Flagged for reviewer judgment.

Case B (speed_ex1 solution, @jit / no parallel): rng = np.random.default_rng() placed before the @jit definition; signature changed to calculate_pi(rng, n=...); np.random.uniform → rng.uniform. Call sites updated. Note: passing a Generator into a @jit function may require Numba to use object mode. I checked that the updated code runs as expected on my side, but I would appreciate reviewer confirmation.

Case C (speed_ex2 solution, plain Python / later JIT-compiled via jit(compute_series)): Same pattern as Case B. Signature changed to compute_series(n, rng); np.random.uniform → rng.uniform. Both the pure Python and jit-compiled call sites updated.

Case D (numba_ex3 solution, @jit(parallel=True) + prange): Draws lifted outside the prange loop. rng, u_draws, and v_draws defined before the @jit(parallel=True) definition; signature changed to calculate_pi(u_draws, v_draws); loop length derived from len(u_draws). n = 1_000_000 kept to match the original example size.

Case E (numba_ex4 solution, @jit(parallel=True) + prange): Legacy np.random.randn() calls kept intentionally as a narrow memory-constrained exception. Pre-allocating (M, n) shock arrays with M = 10_000_000 and n = 20 would require approximately 3.2 GB, which is likely beyond what most readers' machines can comfortably accommodate. A short comment has been added inside the loop explaining this.

Hi @mmcky and @HumphreyYang, I'd be grateful if you could take a look when you have time.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
@github-actions Bot temporarily deployed to pull request May 11, 2026 22:51 (Inactive)
