Assembler is a perpetual decision machine, propelled by the incessant question “Where do I add the next object?”

At each iteration, it chooses the receiver and the rule corresponding to one of the possible sender candidates.

The possibility to choose one object among many in a set implies two things: a quantity that can be measured on every object, and a criterion that picks one object based on that quantity.

A simple statement like “choose the heaviest object” requires both: all objects must possess a weight I can measure, and the criterion is to pick the one with the maximum value.
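That decomposition (a measurable quantity plus a criterion over it) can be sketched in a few lines of Python; the object structure and field names are hypothetical, not Assembler’s actual data types:

```python
# Hypothetical objects: each exposes a measurable quantity ("weight").
objects = [
    {"name": "a", "weight": 3.2},
    {"name": "b", "weight": 7.5},
    {"name": "c", "weight": 1.1},
]

# The criterion: pick the object whose measured quantity is maximal.
heaviest = max(objects, key=lambda o: o["weight"])
# heaviest is the object named "b"
```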

Even the Random selection does it: each object has a numerical index (a measurable quantity), and the criterion is to pick one of those indexes according to a pseudo-random number generator.
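A minimal sketch of that random criterion, in plain Python rather than Assembler’s own scripting environment: the index is the measured quantity, and the generator supplies the pick.

```python
import random

objects = ["a", "b", "c", "d"]

# Each object is identified by its index (a measurable quantity);
# the "criterion" is whatever index the PRNG happens to produce.
rng = random.Random()
index = rng.randrange(len(objects))
chosen = objects[index]
```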

For a meaningful use of Assembler as a design tool, random doesn’t cut it: randomness (or pseudo-randomness) does not have a stable response (the same output given the same input and starting conditions). It guarantees neither a repeatable result under identical starting conditions nor the possibility to adjust or steer within the design space by making controlled variations. In a tool I prefer a stable bias to a random response: you can compensate for a stable bias (as a pilot does for an unbalanced vehicle) or use the bias as a creative advantage; neither is possible with randomness.

Choice and dimensionality

Each quantity can also be represented as a dimension in an abstract “decision space”. If I measure only one quantity (say, temperature), then that space is a single axis, and the measures are like dots along it:

[Figure: objects plotted as dots along a single temperature axis]

In some cases, especially with discrete approximations, multiple objects can share the same quantity value. One possibility is to write a custom criterion that saves the indexes of all tied candidates and chooses one of them at random when the first filter yields more than one potential winner. But I have just expressed my position on randomness and why it has no place as a design criterion.
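For concreteness, that rejected tie-then-random scheme would look something like this (plain Python, hypothetical data):

```python
import random

values = [3.0, 1.0, 4.0, 1.0, 5.0]  # hypothetical quantity per object

best = min(values)
# first filter: save every index whose value equals the minimum
tied = [i for i, v in enumerate(values) if v == best]  # -> [1, 3]
# second step (the part rejected above): break the tie at random
winner = random.choice(tied)
```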

The viable alternative in case of multiple candidates is to increase dimensionality: introduce a second quantity common to all objects and combine the two (for example, temperature and volume). The “decision space” is now a plane:

[Figure: objects plotted as dots on a temperature–volume plane]

We have to write a composite criterion for the choice: “first, select all candidates with minimum temperature, then (if more than one survives the filter) pick the one with minimum volume”. Still, there is no a-priori guarantee that this singles out one winner, because more than one object can pass both filters. We can increase dimensionality again, of course, or, if the criterion is already sufficient for our purposes (any of the surviving candidates will do at this point), not bother adding further dimensions. Still, one has to be chosen.
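The composite criterion can be expressed as a lexicographic comparison. Here is a Python sketch with hypothetical objects and field names (Assembler’s actual criteria are scripted against its own data structures):

```python
# Hypothetical objects carrying the two measured quantities.
objects = [
    {"id": 0, "temperature": 20.0, "volume": 5.0},
    {"id": 1, "temperature": 18.5, "volume": 7.0},
    {"id": 2, "temperature": 18.5, "volume": 4.0},
]

# Tuples compare lexicographically: temperature decides first,
# volume is consulted only to break temperature ties.
winner = min(objects, key=lambda o: (o["temperature"], o["volume"]))
# winner is the object with id 2
```

Note that if ties survive every dimension, `min` still returns a single object: the first one encountered in list order. That is a stable, order-dependent bias rather than a random pick, consistent with the preference stated above.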

The point is: it’s a design decision and there is no standard for it (that’s why there is no “standard” option in Assembler for multi-level selection criteria). The only way (for now) is to script a multi-dimensional criterion that uses the information relevant to your specific case.

Why this obsession with deterministic criteria? The answer is still “stable response”. In an open-ended iterative system (like an assemblage), changes propagate downstream: if I start from the same conditions and a small random choice operates at some iteration (say, picking one among those with the same temperature and volume), two simulations will begin to diverge at the first iteration in which that random criterion is called, and grow into potentially very different outcomes. If you play two games of chess, keep the moves identical until move #20 and then change that move in one of them, you will end up with two very different games, not games that differ by one move only.
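The divergence argument can be demonstrated with a toy iterative system (plain Python, unrelated to Assembler’s internals): each new state depends on the whole history, so two runs that differ in a single choice never re-converge.

```python
def evolve(state, choice):
    # each step folds the chosen value into the accumulated state,
    # so every later state depends on every earlier choice
    return (state * 31 + choice) % 1000

def run(choices):
    state, history = 1, []
    for c in choices:
        state = evolve(state, c)
        history.append(state)
    return history

# identical choices up to step 5; one run differs at step 5 only
a = run([0, 1, 0, 1, 0, 0, 1, 0])
b = run([0, 1, 0, 1, 0, 1, 1, 0])
# a[:5] == b[:5], but every state from step 5 onward differs
```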