Wojciech Potrzebowski edited this page Apr 24, 2026 · 8 revisions

Presentation from the WG meetings: GSC_refactoring.pdf

User Stories (as an outcome of March 17th):

  • As a PhD student / postdoc / experimentalist, I want to start by selecting BioSAS / Soft matter / Magnetic / Other, so that the UI shows only the inputs and options relevant to me.
  • As a user, I want to load PDB / SLD grids / VTK meshes / magnetic data (and later MD/trajectory formats), so that I can compute scattering without guessing file conventions.
  • As an instrument scientist, I want immediate validation of units, voxel size, coordinate frames, volume/normalization, so that results are reproducible and “wrong-by-default” is avoided.
  • As a user, I want the tool to record what I loaded, what transforms were applied, and what engine/settings were used, so that I can reproduce results and include them in proposals/publications.
  • As an experimentalist (SAXS/SANS), I want a one-click I(q) and/or 2D pattern preview, so that I can understand how my system’s scattering should look.
  • As a SANS/BioSAS user, I want to set solvent SLD/contrast (and ideally hydration/solvation options where relevant), so that the calculated scattering matches physical reality.
  • As a user planning an experiment, I want to perturb parameters (e.g., scale, volume, contrast, structure transform, magnetization) and see how the pattern changes, so that I can assess sensitivity and feasibility.
  • As a simulator / ML user, I want to run parameter sweeps and export curves + metadata, so that I can generate training datasets or explore limits of analytical models.
  • As a user, I want to load experimental data and fit key parameters (scale/background/contrast and selected model params) directly from GSC results, so that the tool supports real analysis—not just preview.
  • As a BioSAS/IDP scientist, I want to compute scattering from an ensemble and optionally average across frames/structures, so that I can generate realistic ensemble patterns.
  • As a simulator (Gromacs/MD), I want to load trajectories (e.g., DCD/MD formats) and compute time-averaged scattering, so that I can compare simulation to experiment.
  • As a power user, I want to export the computed object as a reusable plugin model, so that I can fit it repeatedly and share it with collaborators.
  • As a developer, I want GSC to be a Perspective (not a single monolithic Tool), so that it can support fitting, host multiple workflows cleanly, and scale with new engines/modalities.
  • As a developer, I want the UI to show/hide controls based on entry type + modality + engine capabilities, so that users don’t see irrelevant options and we avoid “tabs for everything.”
  • As a developer, I want a stable engine interface (inputs/outputs/capability flags) so that we can integrate new backends (e.g., ShapeSpyer) without rewriting the UI.
  • As a maintainer, I want test cases for bio, soft matter, magnetic, and ensemble modes, so that refactors don’t silently change scientific results.

After feedback:

User stories (draft, updated)

Guiding principle

We should avoid “skins” based on field/community (Bio/Soft/Hard/etc.) as the primary organizing concept. Instead, the UI should be organized by the physics/workflow being addressed, and should dynamically reveal only what is relevant for the chosen entry type + modality + engine capabilities.


A. Core UX

  1. Filter irrelevant options (physics/workflow-first)
  • As a PhD student / postdoc / experimentalist, I want an easy, intuitive way to have the UI show only the options relevant to my workflow, without being confused by inputs/outputs/controls I don’t understand.
  2. Physics-based workflow selection (agnostic start)
  • As a user, I want to start by selecting a physics/workflow (e.g., Polarizable/Magnetic, Anisotropic/Oriented, Isotropic, Ensemble/Trajectory, Parameterized real-space), so that the UI immediately guides me to the right inputs and outputs.
  3. Dynamic UI based on capabilities
  • As a user, I want the UI to show/hide controls based on entry type + modality + engine capabilities, so that I am not overwhelmed with irrelevant options, the GUI is faster to navigate, and I’m less likely to make mistakes.
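As a sketch of how capability-driven show/hide could work (all names here — EngineCapabilities, CONTROL_REQUIREMENTS, visible_controls — are hypothetical illustrations, not existing SasView API):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class EngineCapabilities:
    """Hypothetical capability flags advertised by a scattering engine."""
    supports_2d: bool = False
    supports_magnetic: bool = False
    supports_ensemble: bool = False


# Map each UI control to the capability flag it requires (None = always shown).
CONTROL_REQUIREMENTS = {
    "solvent_sld": None,
    "detector_plane": "supports_2d",
    "magnetization_vector": "supports_magnetic",
    "frame_averaging": "supports_ensemble",
}


def visible_controls(caps: EngineCapabilities) -> list[str]:
    """Return only the controls relevant to the selected engine."""
    return [
        name for name, flag in CONTROL_REQUIREMENTS.items()
        if flag is None or getattr(caps, flag)
    ]
```

The point of the sketch is that the UI layer queries capabilities rather than hard-coding per-community tabs, so a new engine extends the table instead of the widget code.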

B. Entry layer: loading and describing the structure/data

  1. Load supported inputs with minimal friction
  • As a user, I want to load any supported structure/scattering input (e.g., PDB, SLD grids, meshes/VTK, magnetic data, and in the future trajectories) with automatic format detection and minimal dialog/guesswork, so that I can focus on science rather than file parsing.
    • NOTE: This includes clarifying and simplifying the nuclear vs magnetic input experience (e.g., one “entry object” with channels/components rather than two separate, confusing file pathways).
  2. Consistency checks and explicit assumptions (not “truth validation”)
  • As a user (including instrument scientists), I want the tool to perform consistency and completeness checks (e.g., units present/consistent, voxel/step size specified, coordinate frame metadata available, required fields not missing) and clearly report any assumptions/defaults it applies, so that results are reproducible and I understand what the calculation actually used.
    • This is not “validating that the file is correct”, but validating that the inputs are internally consistent and sufficient to compute meaningfully.
  3. Absolute-scale, physically realistic scattering setup
  • As a SAS user, I want the scattering calculation to reflect physical reality on an absolute scale (not just “object in vacuum”), so that comparisons to experiment are meaningful.
    • Examples (scope to be prioritized): volume/normalization for PDB-based inputs, solvent SLD/contrast, exchangeable protons, hydration/solvation layers or density perturbations, concentration/number density where needed.
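A minimal sketch of the “consistency + explicit assumptions” idea from item 2 above (function name, field names, and defaults are hypothetical): missing-but-required fields are hard errors, while fields with a sensible default are filled in and reported as assumptions rather than silently applied.

```python
def check_grid_inputs(metadata: dict) -> tuple[dict, list[str]]:
    """Resolve an input-grid description, reporting every assumption applied.

    Raises ValueError only when the inputs are insufficient to compute
    meaningfully — this is not validation that the file is "correct".
    """
    assumptions = []
    resolved = dict(metadata)

    # Required: without a voxel/step size the grid has no length scale.
    if "voxel_size" not in resolved:
        raise ValueError("voxel_size is required to compute meaningfully")

    # Defaultable: fill in and report, so the user sees what was used.
    if "length_unit" not in resolved:
        resolved["length_unit"] = "angstrom"
        assumptions.append("length_unit missing; assuming angstrom")
    if "solvent_sld" not in resolved:
        resolved["solvent_sld"] = 0.0
        assumptions.append("solvent_sld missing; assuming 0.0 (vacuum)")

    return resolved, assumptions
```

The returned assumptions list is exactly what the reproducibility stories above ask to record alongside the result.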

C. Compute & inspect scattering (basic purpose, but with sensible defaults)

  1. Compute scattering with a simple “happy path”
  • As a user, after loading a structure and setting only the minimum required inputs, I want to click Compute and immediately see the resulting scattering (1D and/or 2D as appropriate), so that the tool’s primary purpose is fast and obvious.
  2. Orientation handling that matches experiment
  • As a user with anisotropic samples, I want clear, explicit control over fixed orientation vs appropriate averaging, so that the computed scattering matches the experimental configuration (and we avoid incorrect assumptions for asymmetric particles).
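For reference, the orientationally averaged “happy path” for point scatterers is the Debye sum, I(q) = sum_ij b_i b_j sin(q r_ij)/(q r_ij). A minimal, deliberately unoptimized sketch (real engines use grid/FFT-accelerated algorithms):

```python
import numpy as np


def debye_iq(q, coords, b):
    """Orientationally averaged Debye sum over point scatterers.

    q      : 1D array of momentum-transfer values
    coords : (N, 3) array of scatterer positions
    b      : (N,) array of scattering lengths
    """
    q = np.asarray(q, dtype=float)
    coords = np.asarray(coords, dtype=float)
    b = np.asarray(b, dtype=float)

    # Pairwise distances r_ij and scattering-length products b_i * b_j.
    rij = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    bb = np.outer(b, b)

    iq = np.empty_like(q)
    for k, qk in enumerate(q):
        x = qk * rij
        safe = np.where(x > 0, x, 1.0)           # avoid 0/0 on the diagonal
        sinc = np.where(x > 0, np.sin(safe) / safe, 1.0)
        iq[k] = np.sum(bb * sinc)
    return iq
```

At q = 0 this reduces to (sum of b)^2, which is a convenient sanity check; a fixed-orientation calculation would instead evaluate |sum_i b_i exp(i q·r_i)|^2 for a specific q-vector, which is exactly the distinction item 2 asks the UI to make explicit.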

D. Compare to data & fitting (high priority)

  1. Easy compare-to-data
  • As an experimentalist, I want to overlay the computed scattering with experimental data (including consistent q-grids/units), so that I can quickly judge agreement and iterate.
  2. Easy fitting of “nuisance” parameters (no expensive recompute)
  • As a SasView user, I want to fit the computed result to experimental data by adjusting parameters like scale and background (and possibly contrast/volume depending on representation) without forcing a full Debye recomputation, so that fitting is fast and practical.
  3. Fit parameterized real-space models (when available)
  • As a user working with parameterizable real-space builders (e.g., Shape2SAS / Lucas’ editors), I want to fit real-space parameters when the model supports them, understanding that these changes may require recomputing the Debye calculation, so that “real-space parameterization → fit” becomes a supported workflow.
    • This should be treated as a distinct workflow from fitting nuisance parameters.
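The nuisance-parameter case (item 2) is cheap because, for a cached model curve, scale and background enter linearly: data ≈ scale · model + background. That makes it a one-shot linear least-squares solve with no Debye recomputation. A sketch (function name is illustrative, not existing SasView API):

```python
import numpy as np


def fit_scale_background(model_iq, data_iq, data_err=None):
    """Fit data = scale * model + background by weighted linear least squares.

    Uses the cached model curve directly; the expensive real-space
    calculation is never re-run.
    """
    model_iq = np.asarray(model_iq, dtype=float)
    data_iq = np.asarray(data_iq, dtype=float)
    w = np.ones_like(model_iq) if data_err is None else 1.0 / np.asarray(data_err)

    # Design matrix columns: [model, 1], weighted by 1/sigma.
    a = np.column_stack([model_iq * w, w])
    scale, background = np.linalg.lstsq(a, data_iq * w, rcond=None)[0]
    return scale, background
```

Fitting real-space geometry (item 3) has no such shortcut: each parameter change invalidates the cached curve, which is why the two workflows should be kept distinct in the UI.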

E. Parameter variation / sweeps (clarified scope)

  1. Parameter variation (interactive)
  • As a user, I want to explore how scattering changes when I vary:
    • global/cheap parameters (scale, background, contrast, orientation settings), and
    • parameterized real-space geometry (when available), so that I can test sensitivity, plan experiments, and understand what matters.
  2. Batch sweeps for feasibility / ML / exploration (optional)
  • As a power user (PI/simulator/ML user), I want to run batch parameter sweeps and export curves + metadata, so that I can generate training datasets, map sensitivity, or demonstrate feasibility in proposals.

F. Ensembles, trajectories, and MD analysis

  1. Ensemble scattering
  • As a BioSAS/IDP user, I want to compute scattering from an ensemble and average appropriately, so that the result reflects realistic distributions rather than a single structure.
  2. Trajectory/MD workflow (future, but planned)
  • As a simulator, I want to compute scattering from trajectories (e.g., DCD/MD/Gromacs-related workflows) and compare to experiment, so that simulation can be evaluated against SAS measurements.
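Once per-frame curves exist on a common q-grid, the ensemble/trajectory average in both stories is a weighted mean over frames. A sketch (assuming pre-computed per-frame I(q); the function name and weighting scheme are illustrative):

```python
import numpy as np


def ensemble_average(frame_curves, weights=None):
    """Average per-frame I(q) curves into a single ensemble pattern.

    frame_curves : array-like of shape (n_frames, n_q), all on one q-grid
    weights      : optional per-frame populations (default: uniform, as
                   for equally spaced MD trajectory frames)
    """
    curves = np.asarray(frame_curves, dtype=float)
    if weights is None:
        weights = np.full(len(curves), 1.0 / len(curves))
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()   # normalize so populations sum to 1
    return weights @ curves
```

Non-uniform weights cover the ensemble-reweighting case (e.g., IDP ensembles with fitted populations), while the uniform default matches plain time-averaging over trajectory frames.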

G. Export, reproducibility, and maintainability

  1. History + reproducibility
  • As a user, I want the tool to maintain the history of produced curves/settings (for undo/redo and reproducibility), so that I can retrace how a result was obtained.
  2. Export plugin models for fitting workflows
  • As a SasView user, I want to export a calculation as a plugin model usable in SasView fitting, including appropriate structure-factor coupling, so that I can reuse and share the model.
    • We must differentiate parameters that do not require recomputation (scale/background/possibly contrast/volume) from those that do (real-space geometry/structure changes).
  3. Documented, scalable API
  • As a developer/power user, I want a proper, documented, stable API for the entry/modality/engine layers, so that new engines (and integrations like external packages) can be added without rewriting the UI.
  4. Test coverage
  • As a maintainer, I want appropriate test cases for all functionality (including “golden” scientific validation cases), so that refactoring does not silently change results.
  5. Minimize dependency burden
  • As a maintainer, I would like to minimize the number of new packages that need to be supported.
  6. Respect SasView GUI precedents
  • Most importantly, as a maintainer AND as a SasView user, I want any new GUI to adhere to existing SasView GUI precedents/philosophies/look-and-feel whenever possible, recognizing this is not a greenfield UI.
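A possible shape for the stable engine API in item 3 is a structural interface with capability flags, so new backends plug in without UI changes. Everything below (ScatteringEngine, SphereEngine, the flag names) is a hypothetical sketch, not an agreed design:

```python
import numpy as np
from typing import Protocol


class ScatteringEngine(Protocol):
    """Hypothetical stable contract between the UI and any compute backend."""
    name: str
    supports_2d: bool
    supports_magnetic: bool

    def compute_iq(self, q: np.ndarray) -> np.ndarray:
        """Return I(q) on the supplied q-grid."""
        ...


class SphereEngine:
    """Toy engine (uniform-sphere form factor), just to exercise the contract."""
    name = "sphere"
    supports_2d = False
    supports_magnetic = False

    def __init__(self, radius: float):
        self.radius = radius

    def compute_iq(self, q: np.ndarray) -> np.ndarray:
        x = np.asarray(q, dtype=float) * self.radius
        with np.errstate(invalid="ignore", divide="ignore"):
            # Sphere amplitude 3(sin x - x cos x)/x^3, with F(0) = 1.
            f = np.where(x > 0, 3 * (np.sin(x) - x * np.cos(x)) / x**3, 1.0)
        return f**2
```

Because Protocol uses structural typing, an external package (e.g., a ShapeSpyer wrapper) satisfies the interface just by providing the same attributes and method, without importing or subclassing anything from SasView.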

Notes / clarifications (capturing open questions)

  • Validation: focus on consistency + completeness + explicit assumptions, not “verifying the user’s file is correct.”
  • Parameter sweeps & fitting: distinguish
    1. cheap/global params (no Debye recompute), vs
    2. real-space parameterized models (Debye recompute required).
  • File formats: clarify which additional formats are truly required vs which should be supported via minimal-dependency pathways.

Draft overarching requirements (work-in-progress)

  • Provide a simple/intuitive mechanism to restrict available options to those useful/understandable for a given user/workflow.
  • Scattering should match physical reality on an absolute scale (needs SLD/volume, solvent/contrast, exchangeable protons, solvent density layers, concentration; include realistic distributions via ensembles/trajectories/size/morphology distributions where appropriate).
  • Maintain history of produced curves (undo/redo + reproducibility).
  • Provide a proper, full, documented and scalable API.
  • All functionality should have appropriate test cases.
  • Allow creation of plugin models for fitting, with clear separation between parameters that do/don’t require Debye recomputation.
