Thursday, March 25, 2010

Change Logic

As part of the reconstruction of the concepts of computability and intelligence in terms of propositional logic, I am currently working on an effective formalization of (causal) processes. I realized that this goal is very similar to certain branches of modal logic, but that the approach is very different. Although the work itself is still at a premature stage, I thought it would be interesting to work out these differences and explain some core ideas. The result is a five-page paper that starts with the following preface:


Suppose temporal logic is the subject that looks for a language and a logic to reason about processes and things that change in time. Then it seems that this implies a thorough study of time itself. But this is wrong. Time is a philosophical burden and dead weight in temporal logic. We shouldn't try to associate events with a time structure; we only need to register change during the process. This paradigm shift is one starting point of a research project called change logic.


However, the elimination of time from temporal logic may not be as surprising as it is made to sound here. In the standard modal-logical reconstruction, the relation to a time structure via a Kripke model is also only temporary: once the formal system is motivated and its soundness and completeness are shown, time becomes superfluous there as well and disappears. Conversely, it is also possible to attach a linear time structure to what will be introduced as change logic.


So in the end, the real change with change logic comes not so much from a new philosophical semantics as from the fact that the whole thing is pulled off without adding new constructs to the syntax. In other words, change logic is temporal logic without modal operators.
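
A tiny Haskell toy may convey the flavour of that slogan. The following lines are not taken from the paper and say nothing about the actual formalism of change logic; they only illustrate the idea that a recorded process is just a sequence of states, and that statements like "eventually p" can be ordinary functions on that record instead of modal operators inside a formula language:

    -- A toy reading of the slogan, not the formalism of the paper:
    -- a recorded process is the sequence of its states, and temporal
    -- statements are ordinary functions on that sequence, with no
    -- modal operators in the object language.

    type State   = String -> Bool   -- which atomic facts hold in a state
    type Process = [State]          -- a finite record of a process

    eventually :: (State -> Bool) -> Process -> Bool
    eventually p = any p

    always :: (State -> Bool) -> Process -> Bool
    always p = all p

    -- Example: in this little process, "doorOpen" eventually holds.
    main :: IO ()
    main = print (eventually (\s -> s "doorOpen") process)
      where
        closed, opened :: State
        closed _ = False
        opened a = a == "doorOpen"
        process  = [closed, closed, opened]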


The whole text is located here.

Thursday, February 4, 2010

"Literal Mathematics"

There seems to be a real revolution in mathematics on its way, namely the long overdue formal standardization.

This process is maybe best elaborated in the following hierarchy of three designs:
  • MathML, now in version 3.0 (December 2009), an XML application with its basic syntax in two modes: Presentation Markup and Content Markup (a small example of the two modes follows right after this list). [http://www.w3.org/TR/MathML]
  • OpenMath, a standard for the representation of the semantics of mathematical objects. [http://www.openmath.org]
  • OMDoc ("Open Mathematical Documents"), which builds on OpenMath and MathML Content Markup and extends them to whole mathematical statements, theories and documents. [http://omdoc.org]
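
For a first impression of the difference between the two MathML modes, here is the expression x² + 1, once as Presentation Markup (how the formula looks) and once as Content Markup (what it means). This is just a toy example of mine, not taken from the specification:

    <!-- Presentation Markup: how the expression x^2 + 1 is displayed -->
    <math xmlns="http://www.w3.org/1998/Math/MathML">
      <mrow>
        <msup><mi>x</mi><mn>2</mn></msup>
        <mo>+</mo>
        <mn>1</mn>
      </mrow>
    </math>

    <!-- Content Markup: what the expression x^2 + 1 means -->
    <math xmlns="http://www.w3.org/1998/Math/MathML">
      <apply>
        <plus/>
        <apply> <power/> <ci>x</ci> <cn>2</cn> </apply>
        <cn>1</cn>
      </apply>
    </math>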
Sure, in our times there is an inflation of revolutions. But I think this one is really big, despite the unexciting appearance of any norm. Some remarks on why I think so:

  • A standard will provide us with a common language. A common language for mathematicians is more than a lingua franca or English as the world language: formal scientists always need to create the world first, before they can talk about it. But at present there is not even agreement on whether the "natural numbers" start from 0 or from 1. This is not freedom, but pure inefficiency.
  • Currently, each constructive idea requires the choice of a concrete programming language when it is implemented. "Higher" languages have an interface concept that abstracts the signature from the details of the implementation, but there are no real standards for translation between different languages: each language has to re-implement all the libraries in order to be a useful one. Of course, there is XML now, and that is a big step (MathML, OpenMath, and OMDoc are XML applications, too). But for mathematical structures and theories, XML itself is far too general.
  • This standardization of mathematics is not a new foundation in the meta-mathematical sense. It is a syntactic agreement, not a semantic super-theory: it doesn't explain what a mathematical object really is, it only defines how we define one. I suppose this whole movement comes so late because it was anything but obvious from the tradition that a global standard does not need a global ontology. (And maybe, in the end, that is a new foundation after all.)
  • Donald Knuth introduced literate programming to emphasize the idea that documents should be comprehensible to humans and computers alike. Many programming languages offer "literate programming" tools, but none of them lives up to that promise.
Unfortunately, at present there is still a shortage of tools to efficiently and comfortably work with these new formats.

Tuesday, February 2, 2010

PropLogic

In The propositional logic project, I describe my objections to the current state of this subject as follows:

Propositional logic has certainly become a standard notion of our scientifically educated society, maybe as much as the arithmetic systems of the integers, rational and real numbers. Boolean operations are standard in many areas of our computerized daily life, and digital logic is the mathematical structure behind computer hardware and information software.


But unlike arithmetic systems and other basic data structures such as lists, matrices or regular expressions, propositional algebras are not part of the standard tool repertoire of programming languages. This is certainly due to the cost explosion (see e.g. the Boolean satisfiability problem) of the default implementations. What we need is a fast implementation that allows these structures to be used as basic tools by other programs.


The other problem with propositional logic is its classic algebraization as a (free) Boolean algebra, which is only an abstraction of the semantic structure of propositional logic. That way, we lose some of the information. In other words, we need an algebraization that also preserves the syntactic structure of propositional worlds.
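
To make this concrete, here is a toy sketch of what it means to have propositional formulas as a plain data structure. It is only an illustration, not the interface of the package described below:

    -- A toy sketch only: propositional formulas as a plain recursive data
    -- structure, with truth evaluation under a given valuation of the atoms.
    data Prop
      = Atom String
      | Not Prop
      | And Prop Prop
      | Or  Prop Prop
      deriving (Show, Eq)

    -- Truth value of a formula, given a valuation of its atoms.
    eval :: (String -> Bool) -> Prop -> Bool
    eval v (Atom a)  = v a
    eval v (Not p)   = not (eval v p)
    eval v (And p q) = eval v p && eval v q
    eval v (Or  p q) = eval v p || eval v q

Note that And (Atom "a") (Atom "a") and Atom "a" are equal as elements of the Boolean algebra, but different as syntactic objects. Everything interesting, such as equivalence, satisfiability and normal forms, sits on top of such a structure, and that is exactly where the cost explosion strikes.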

I am happy to announce the release of PropLogic, a Haskell package that intends to fix these problems and that might serve as a general and useful tool.


Despite my original intention to write a compact implementation of a pretty compact theory, this distribution has become overloaded in the attempt to explain all its aspects. I suppose the best places to start are A little program for propositional logic and the Brief introduction to PropLogic.


The first of these two tutorials doesn't require prior Haskell knowledge. Any fast implementation of a propositional algebra also provides a fast SAT solver, and there is considerable interest in and competition for the quickest solution. I have no idea how my program performs compared to other existing algorithms out there, but I tried to illustrate with some data how well it does the job. (I must admit, however, that my "fast" program has its limits, too.)
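
Just to spell out the connection between a propositional algebra and SAT: a formula is satisfiable exactly when some valuation of its atoms makes it true. The following deliberately naive sketch is again only a toy of mine, nothing like the algorithm a fast implementation would actually use, but it makes the connection explicit:

    import Data.List (nub, subsequences)

    -- The same kind of toy formula type as in the sketch above, repeated
    -- here so that this snippet stands on its own.
    data Prop = Atom String | Not Prop | And Prop Prop | Or Prop Prop

    -- Truth of a formula, given the list of atoms that are set to True.
    holds :: [String] -> Prop -> Bool
    holds ts (Atom a)  = a `elem` ts
    holds ts (Not p)   = not (holds ts p)
    holds ts (And p q) = holds ts p && holds ts q
    holds ts (Or  p q) = holds ts p || holds ts q

    -- The atoms occurring in a formula, without duplicates.
    atoms :: Prop -> [String]
    atoms (Atom a)  = [a]
    atoms (Not p)   = atoms p
    atoms (And p q) = nub (atoms p ++ atoms q)
    atoms (Or  p q) = nub (atoms p ++ atoms q)

    -- Naive SAT: try every subset of the atoms as the set of true atoms.
    satisfiable :: Prop -> Bool
    satisfiable p = any (\ts -> holds ts p) (subsequences (atoms p))

The brute force runs through all 2^n valuations of the n atoms; the whole game of a fast implementation is to avoid exactly this.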


The thing seems to work properly as it is, but I would still like to do some polishing and then upload it to Hackage soon. It would be very nice to get a boost from the comments and reactions of the Haskell community.