More of a test design problem. How do we design (and implement) deployments which allow us to test upgrades and compatibility?
Example: how does an old LRR instance (one without "summary") behave with respect to archive metadata after upgrading to the current version? Is "summary" automatically added, or is it simply absent? And how does that behavior affect clients which expect "summary" to be present?
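One way to pin this down is a small classifier that compares the same archive's metadata before and after an upgrade and names the migration behavior. This is a hedged sketch: the field name "summary" comes from the question above, but the three outcome labels and the helper itself are hypothetical, not part of LRR.

```python
def classify_summary_migration(old_meta: dict, new_meta: dict) -> str:
    """Classify how an upgrade handled the 'summary' field.

    old_meta: archive metadata as produced by the pre-'summary' version.
    new_meta: the same archive's metadata after upgrading.
    Returns one of 'populated', 'added-empty', or 'ignored' (labels are ours).
    """
    if "summary" in new_meta:
        # The upgrade introduced the field; distinguish an empty backfill
        # from an actual value so client expectations can be checked.
        return "populated" if new_meta["summary"] else "added-empty"
    # The field is still absent; clients expecting 'summary' must tolerate it.
    return "ignored"
```

A test for an upgrade scenario then becomes a one-line assertion on the classified outcome, e.g. `classify_summary_migration(before, after) == "added-empty"`.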
Obviously, if we only ever control the LRR source code, we can't expect to truly rebuild a dist from a specific point in time that depends on a specific version of a resource we don't control (e.g. building a Docker image from last year when the dependency no longer exists on Alpine). To address this we'd have to pre-build the required dist(s) and have some way to store them. Alternatively, instead of storing the dists themselves, we could store the data each dist would produce, which would probably be easier.
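The "store the data, not the dist" option could look like version-tagged snapshot fixtures plus a key-level diff, so upgrade tests compare a historical snapshot against current output without rebuilding anything. The fixture layout and both helpers here are assumptions for illustration, not an existing LRR convention.

```python
import json
from pathlib import Path

# Hypothetical layout: one JSON snapshot per historical dist version,
# captured once while that dist was still buildable.
FIXTURE_DIR = Path("tests/fixtures/dists")

def load_fixture(version: str) -> dict:
    """Load the metadata snapshot captured from a given historical dist."""
    return json.loads((FIXTURE_DIR / f"{version}.json").read_text())

def diff_keys(old: dict, new: dict) -> tuple[set, set]:
    """Return (keys added by the upgrade, keys dropped by the upgrade)."""
    return set(new) - set(old), set(old) - set(new)
```

With this, a compatibility test reduces to asserting that `diff_keys(load_fixture("v0.x"), current_metadata)` contains only the additions the upgrade is supposed to make.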