Nice work, btw. Really digging it.

Equations don’t generally have infinite numbers of variables, but the variables might have infinite numbers of possible values (again, x = y + z is an example). And no, this technique doesn’t work then. The tutorial is mostly just talking about “finite domain” solvers, and “finite domain” here means the set of possible values for the variables (their domains) is finite. And realistically, it also means the sets are small enough to be represented as a bitmask in a machine word.
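To make the bitmask idea concrete, here's a minimal sketch in Python. All the names (`domain`, `intersect`, `is_singleton`) are my own, not from the tutorial; the point is just that each domain fits in one machine word and narrowing is a single bitwise AND.

```python
# Hypothetical sketch: a finite domain stored as a bitmask.
# Bit i set means "value i is still possible for this variable".

def domain(*values):
    """Build a bitmask domain from a set of small integer values."""
    mask = 0
    for v in values:
        mask |= 1 << v
    return mask

def intersect(a, b):
    """Narrowing a domain is just bitwise AND."""
    return a & b

def is_empty(d):
    return d == 0          # no bits set: the constraints are unsatisfiable

def is_singleton(d):
    return d != 0 and (d & (d - 1)) == 0   # exactly one bit set

# x could be 1, 2, or 3; some constraint only allows 2 or 5.
x = domain(1, 2, 3)
narrowed = intersect(x, domain(2, 5))
print(is_singleton(narrowed))   # True: only the value 2 survives
```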

The tutorial also talks briefly toward the end about how you can handle limited cases of floating-point variables by representing the current possible values for the variable using an interval (upper and lower bounds) rather than a bitmask. Our path function system uses that, although only in a very limited way. I’m working now on a system that has a much more general implementation of floating-point variables, although as with all this stuff, things get slow fast if you make the constraint solver work too hard. I ought to have a paper on it in time for the FDG conference.
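For the interval case, narrowing works the same way in spirit: you intersect the current bounds with the bounds a constraint implies. Here's a rough Python sketch under that assumption; the `Interval` class and its `narrow` method are illustrative names of mine, not code from the tutorial or the path function system.

```python
# Hypothetical sketch: a float variable's domain as a (lo, hi) interval
# instead of a bitmask. Narrowing intersects intervals.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def narrow(self, lo, hi):
        """Intersect this interval with [lo, hi].
        Returns False if the intersection is empty (a contradiction)."""
        self.lo = max(self.lo, lo)
        self.hi = min(self.hi, hi)
        return self.lo <= self.hi

x = Interval(0.0, 10.0)
ok = x.narrow(2.5, 20.0)   # e.g. a constraint implies x >= 2.5
print(ok, x.lo, x.hi)      # True 2.5 10.0
```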

I’m also not sure why every constraint contains a list of all variables. Is it not better to remove variables that have already been placed from the list, or does this not make a difference in performance?

Does such a method work when the list of variables is infinite, such as when finding solutions to equations?

One could write it differently so that Narrow just updates v to be the set argument, but then in cases like the above, you need to remember always to manually intersect the new mask with the old values. Since doing the intersection is cheap, it makes more sense to do it automatically inside Narrow.
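A minimal Python sketch of that design choice, assuming the variable stores its domain as a bitmask in `v.values` (the field name comes from the quoted snippet; everything else here is my guess at the surrounding code, not the tutorial's actual implementation):

```python
# Sketch: Narrow intersects rather than overwrites, so a caller can
# never accidentally re-admit values that were already ruled out.

class Var:
    def __init__(self, values):
        self.values = values   # bitmask of still-possible values

def narrow(v, mask):
    """Intersect v's domain with mask; report whether it shrank."""
    newset = v.values & mask   # automatic intersection with old values
    changed = newset != v.values
    v.values = newset
    return changed             # caller can propagate further if True

v = Var(0b0111)                # values {0, 1, 2}
print(narrow(v, 0b0110))       # True: value 0 was removed
print(narrow(v, 0b1110))       # False: domain already inside the mask
```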

newset = v.values & set

in the function

Narrow(v, set)

Now, set = v ∈ v_i.values, where v_i.values = v.values, which means that the value of newset is set no matter what the values of the two sets, v.values and set, are. Did you mean XOR? Or do I not understand what’s going on?
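For what it's worth, here's a quick check of the two cases (my own example, not from the tutorial). The `&` is intersection, not XOR: `newset` only comes out equal to `set` when `set` is already a subset of `v.values`; otherwise the intersection genuinely removes values.

```python
# v.values and set as bitmasks; & is set intersection.
values = 0b1011            # v currently allows {0, 1, 3}

subset = 0b0011            # set ⊆ v.values, so newset == set
print(values & subset == subset)   # True

other = 0b0110             # set allows {1, 2}; 2 isn't in v.values
print(bin(values & other))         # 0b10: only value 1 survives
```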

Thanks.
