Machine learning methods typically attempt to construct, from a limited amount of data, a more general model that extends beyond the available examples. Many methods aim to be purely data driven, assuming that all relevant information is contained in the data. On the other hand, there often exists additional abstract knowledge about the system to be modeled, but there is no obvious way to combine these two domains. We propose the calculus of functional equations as an appropriate language to describe many relations in a way that is more general than a typical parameterized model, yet allows one to be more specific about the setting than a universal approximation scheme such as a neural network. Symmetries, conservation laws, and concepts like determinism can be expressed this way. Many of these functional equations can be translated into specific network structures and topologies, which constrain the possible input-output relations of the network to the solution space of the equations. This reduces the amount of data necessary for training and may also yield more general results derivable from the model. As an example, a natural method for the inter- and extrapolation of time series is derived, which does not rely on any fixed interpolation scheme but is constructed automatically from the knowledge, or assumption, that the data series is generated by an underlying deterministic dynamical system.
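As a minimal sketch of the idea of building a functional equation into a network structure (not the paper's specific construction), consider the even-symmetry equation f(x) = f(-x): wrapping an arbitrary network g in a symmetrizing construction guarantees that every realizable input-output map satisfies the equation, so training searches only within its solution space. All names (g, f, the layer sizes) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary, unconstrained one-hidden-layer network g.
W1 = rng.normal(size=(8, 1))
b1 = rng.normal(size=(8,))
W2 = rng.normal(size=(1, 8))

def g(x):
    h = np.tanh(W1 @ np.atleast_1d(x) + b1)
    return float(W2 @ h)

def f(x):
    # Symmetrizing construction: f(x) = g(x) + g(-x) is even for ANY
    # choice of g, so the constraint f(x) = f(-x) holds by structure,
    # not because it was learned from data.
    return g(x) + g(-x)

print(f(1.3) - f(-1.3))  # → 0.0, even by construction
```

The same pattern extends to other functional equations: the equation is solved once at the level of the architecture, and the remaining free parameters are fitted to data as usual.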