tag:blogger.com,1999:blog-52222320694154482482024-07-15T15:30:35.611-07:00bucephalus.orginteraction site for bucephalus.orgbucephalushttp://www.blogger.com/profile/10322402600537850032noreply@blogger.comBlogger11125tag:blogger.com,1999:blog-5222232069415448248.post-60204686877479826992014-07-11T15:26:00.000-07:002014-07-11T15:26:49.680-07:00<h1 id="a-style-rule-for-camelcase">A style rule for CamelCase</h1>
<p>for JavaScript, in particular.</p>
<h2 id="the-context">The context</h2>
<p><code>"bactrian camels have two humps"</code> is an example of a string.</p>
<p>It is common in programming languages that <em>identifiers</em> (labels or names for variables and values) are not just arbitrary strings, but severely restricted ones. Moreover, due to the omission of the quotation marks, they are syntactic entities distinct from strings.</p>
<p><code>camel27</code> is a typical identifier.</p>
<p>As an immediate consequence, since quotation marks no longer serve as delimiters, white space must be excluded from identifiers.</p>
<p>The phrase <code>bactrian camels have two humps</code> is not one, but (a sequence of) five identifiers.</p>
<p>One strategy to overcome this limitation is the insertion of connecting strokes: <code>bactrian-camels-have-two-humps</code> is an identifier in, say, Scheme, and <code>bactrian_camels_have_two_humps</code> is a legal identifier in JavaScript.</p>
<p><a href="http://en.wikipedia.org/wiki/CamelCase">CamelCase</a> is another strategy for the conversion of multi-word phrases into a single word: <code>bactrianCamelsHaveTwoHumps</code> is now an identifier for JavaScript. And of course, <code>JavaScript</code> itself is another one.</p>
<h2 id="the-rule">The rule</h2>
<p>I suggest the following rule for the generation of CamelCase identifiers:</p>
<blockquote>
<p>Suppose the descriptor for the identifier in native language has the form</p>
<pre><code>w_1 w_2 w_3 .... w_n</code></pre>
<p>i.e. <code>n</code> words, separated by white space. For example,</p>
<pre><code>the president of USA</code></pre>
<p>Try to avoid one-letter words (such as <code>I</code> or <code>a</code>) in the given phrase.</p>
<p>Now convert the phrase into a CamelCase identifier by doing:</p>
<ol style="list-style-type: decimal">
<li><p>For each of the <code>n</code> words, turn all but the first letter into lower case. So the example phrase becomes</p>
<pre><code>the president of Usa</code></pre></li>
<li><p>Capitalize the first letter of the words <code>w_2</code>,..., <code>w_n</code>. So</p>
<pre><code>the President Of Usa</code></pre></li>
<li><p>Remove the white space between the words, so</p>
<pre><code>thePresidentOfUsa</code></pre></li>
</ol>
<p>The result is your identifier.</p>
</blockquote>
<p>Some of the words in the initial phrase (but not <code>w_1</code>, and not two consecutive words) may be decimals (e.g. <code>4321</code>); these remain unchanged. For example, the phrase <code>Henry 5 of England</code> turns into the identifier <code>Henry5OfEngland</code>. In that case, the first letter of the word following the decimal may remain unchanged, i.e. the <code>of</code> does not have to be changed to <code>Of</code>, and the resulting identifier is <code>Henry5ofEngland</code>.</p>
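<p>The three steps above can be sketched as a small JavaScript function. This is a minimal illustration, not part of any library; the function name <code>toCamelCase</code> and the treatment of decimals (adopting the relaxed variant, where the word after a decimal keeps its first letter) are my own choices.</p>

```javascript
// Convert a multi-word phrase into a CamelCase identifier,
// following the three-step rule above. Hypothetical helper.
function toCamelCase(phrase) {
  const words = phrase.trim().split(/\s+/);
  return words
    .map((word, i) => {
      if (/^[0-9]+$/.test(word)) return word; // decimals stay unchanged
      // Step 1: turn all but the first letter into lower case.
      const lowered = word.charAt(0) + word.slice(1).toLowerCase();
      // Step 2: capitalize the first letter of w_2, ..., w_n,
      // except (by the relaxed variant) the word following a decimal.
      if (i === 0 || /^[0-9]+$/.test(words[i - 1])) return lowered;
      return lowered.charAt(0).toUpperCase() + lowered.slice(1);
    })
    .join(''); // Step 3: remove the white space between the words
}

toCamelCase('the president of USA'); // → 'thePresidentOfUsa'
toCamelCase('Henry 5 of England');   // → 'Henry5ofEngland'
```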
<h2 id="remarks-and-examples">Remarks and examples</h2>
<p>Many identifiers in JavaScript are ugly in the sense that they contain too many consecutive capital letters, which makes them hard to read and memorize. Had the rule been applied, things would look a little nicer. Some examples:</p>
<ul>
<li><p><code>JavaScript</code> is all right, at least as a CamelCase identifier, but <code>ECMAScript</code> is bad. The phrase <code>ECMA script</code> converts to the identifier <code>EcmaScript</code>, which would be a much better spelling.</p></li>
<li><p><code>JavaScriptObjectNotation</code> would be all right, but <code>JSON</code> is a bad identifier and should have been <code>Json</code> instead. <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON"><code>JSON</code></a> is a standard object in ECMAScript 5, and it is not only a badly, but also a wrongly chosen identifier: by convention, only constants are written entirely in capitals.</p></li>
<li><p>The whole <a href="https://developer.mozilla.org/en-US/docs/Web/API">DOM</a> is full of badly styled identifiers; the most striking example is <code>XMLHttpRequest</code>. Two acronyms in two different styles (<code>XML</code> and <code>Http</code>), what a mess. It should have been <code>XmlHttpRequest</code>.</p></li>
</ul>
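<p>For re-styling existing identifiers such as those in the list above, one can split the identifier at its case transitions and re-apply the rule. The following is a hypothetical sketch; the helper <code>fixIdentifier</code> is my own invention, not part of any standard or library:</p>

```javascript
// Re-style an identifier with all-caps acronyms according to the rule.
// Hypothetical helper: splits at case transitions, then recases each word.
function fixIdentifier(identifier) {
  // Tokenize: acronym before a capitalized word, capitalized word,
  // all-caps run, lower-case run, or decimal.
  const tokens = identifier.match(
    /[A-Z]+(?=[A-Z][a-z])|[A-Z][a-z]+|[A-Z]+|[a-z]+|[0-9]+/g
  ) || [];
  // Treat each token as a word: keep its first letter, lower the rest.
  return tokens.map(t => t.charAt(0) + t.slice(1).toLowerCase()).join('');
}

fixIdentifier('ECMAScript');     // → 'EcmaScript'
fixIdentifier('JSON');           // → 'Json'
fixIdentifier('XMLHttpRequest'); // → 'XmlHttpRequest'
```

An identifier that already conforms to the rule, such as <code>JavaScript</code>, passes through unchanged.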
<p>Another particular challenge is the naming of <a href="http://nodejs.org">Node.js</a>-related code, especially since there already is a different <a href="https://developer.mozilla.org/en-US/docs/Web/API/Node"><code>Node</code></a> object in the standard. If we consider the dot as a space between words, <code>Node.js</code> is the phrase <code>Node js</code>, and that converts into <code>NodeJs</code>, according to our rule.</p>bucephalushttp://www.blogger.com/profile/10322402600537850032noreply@blogger.com0tag:blogger.com,1999:blog-5222232069415448248.post-6733923325871896352013-09-04T15:59:00.000-07:002013-09-04T16:04:00.096-07:00The HTML5 Canvas Handbook<p>
This HTML file
<blockquote>
<a href="http://bucephalus.org/text/CanvasHandbook/CanvasHandbook.html">CanvasHandbook.html</a>
</blockquote>
comprises
<blockquote>
<strong>The HTML5 Canvas Handbook</strong>
</blockquote>
an introduction to the <code>&lt;canvas&gt;</code> tag and a comprehensive reference to the corresponding JavaScript objects.
</p>
<p>
The whole HTML document is presented in a single file.
A print version takes about 80-110 pages.
</p>
<p>
This blog post is meant to be the forum for this text.
Comments and remarks are very welcome!
</p>bucephalushttp://www.blogger.com/profile/10322402600537850032noreply@blogger.com4tag:blogger.com,1999:blog-5222232069415448248.post-57941189514269112392011-01-21T08:52:00.000-08:002011-01-21T09:07:36.539-08:00ElephantMarka simple PHP documentation tool with Markdown markup<br /><p>Sure, (X)HTML is the standard format for documentation, and that is great. This holds even more for people writing programs. But then, writing or reading HTML code directly is a bit awkward, and a couple of <a href="http://en.wikipedia.org/wiki/Lightweight_markup_language">lightweight markup languages</a> have emerged to make that easier. In the past, I often used Perl's <a href="http://en.wikipedia.org/wiki/Plain_Old_Documentation">POD</a>. But recently, I discovered <a href="http://en.wikipedia.org/wiki/Markdown">Markdown</a>, which is even more convenient and versatile as a general tool.</p><br /><p>I think Markdown is also very suitable as a format in program comments. For example, wrapping a piece of code ... into code tags <code>&lt;code&gt;....&lt;/code&gt;</code> is achieved by simply placing backticks around it. Putting a block of code inside a <code>&lt;pre&gt;&lt;code&gt;....&lt;/code&gt;&lt;/pre&gt;</code>, while converting special characters (&amp;, &lt;, &gt;, etc.) into HTML entities, is simply done by indenting the lines with 4 spaces or 1 tab. All this is very intuitive and effortless.</p><br /><p>With <strong>ElephantMark</strong>, I now have a tool that puts this idea into practice for PHP. ElephantMark is two things: first, it states three short and simple rules that turn ordinary PHP comments into Markdown text pieces; secondly, it is a script <code>elephantmark.php</code> that actually performs this conversion and can be used like <a href="http://www.phpdoc.org">phpdoc</a>.</p><br /><p><strong>ElephantMark</strong> is not a competitor to standard documentation tools. Professional programmers building big libraries will need smarter tools with features like automatic cross-references. 
The idea of using Markdown in source code comments is probably more interesting for people who need to write in many different programming languages and use PHP only occasionally. ElephantMark is understood in five minutes; there is no entirely new documentation language to learn.</p><br /><p>In its current version 1, <code>elephantmark.php</code> makes use of the Markdown-to-HTML converter <code>markdown.php</code>, written by <a href="http://michelf.com/projects/php-markdown/">Michel Fortin</a>. This is a great tool in itself, thank you very much! Currently, one needs both of these scripts for ElephantMark conversions. But maybe there is a way to merge them into one file for future versions.</p><br /><p>So, here is the script, which is its own manual:<br /><blockquote><br /> <p><a href="http://www.bucephalus.org/ElephantMark/elephantmark.php?php=elephantmark.php&title=ElephantMark&css=mystyle.css">elephantmark.php</a></p><br /></blockquote><br /></p>bucephalushttp://www.blogger.com/profile/10322402600537850032noreply@blogger.com2tag:blogger.com,1999:blog-5222232069415448248.post-29877840598229489842010-03-25T08:18:00.000-07:002010-03-25T08:56:30.266-07:00Change LogicAs part of the reconstruction of the computability and intelligence concept in terms of propositional logic, I am currently working on an effective formalization of (causal) processes. I realized that this goal is very similar to that of certain branches of modal logic, but that the approach is very different. Although the work itself is still in a premature state, I thought it would be interesting to work out these differences and explain some core ideas. This resulted in a five-page paper that starts with the following preface:<br /><br /><blockquote><br /> Suppose <em>temporal logic</em> is the subject that looks for a language and logic to reason about processes and things that change in time. Then it seems that this implies a thorough study of time itself. But this is wrong. 
Time is a philosophical burden and dead weight in temporal logic. We shouldn't try to associate events with a <em>time</em> structure; we only need to register <em>change</em> during the process. This paradigm shift is one starting point of a research project called <em>change logic</em>.<br /> <br/><br /> However, the elimination of time from temporal logic may not be as surprising as it tries to sound here. Actually, in the standard modal logical reconstruction, the relation to a time structure via a Kripke model is also only temporary. Once the formal system is motivated and its soundness and completeness are shown, time becomes superfluous here as well and disappears. In fact and in return, it is also possible to attach a linear time structure to what will be introduced as change logic.<br /> <br/><br /> So in the end, the real change with change logic does not come so much from a new philosophical semantics, but from the fact that the whole thing was pulled off without adding new constructs to the syntax. In other words, change logic is temporal logic without modal operators.<br /></blockquote><br /><br />The whole text is located <a href="http://www.bucephalus.org/text/OpenChangeLogic/OpenChangeLogic.html">here</a>.bucephalushttp://www.blogger.com/profile/10322402600537850032noreply@blogger.com0tag:blogger.com,1999:blog-5222232069415448248.post-15154847384208508192010-02-04T17:58:00.000-08:002010-02-05T04:13:07.946-08:00"Literal Mathematics"There seems to be a real revolution in mathematics under way, namely the long-due<span style="font-weight: bold;"> formal standardization</span>.<br /><br />This process is perhaps best elaborated in this hierarchy of three designs:<br /><ul><li><span style="font-weight: bold;">MathML</span>, now in version 3.0 (December 2009), an XML application with the basic syntax in two modes: Presentation Markup and Content Markup. 
[<a href="http://www.w3.org/TR/MathML">http://www.w3.org/TR/MathML</a>]<br /></li></ul><ul><li><span style="font-weight: bold;">OpenMath</span>, a standard for representing the semantics of mathematical objects. [<a href="http://www.openmath.org/">http://www.openmath.org</a>]<br /></li></ul><ul><li><span style="font-weight: bold;">OMDoc</span>, for the formalization of entire documents. [<a href="http://www.omdoc.org/">http://www.omdoc.org</a>]<br /></li></ul>Sure, in our times there is an inflation of revolutions. But I think this one is really big, despite the unexciting appearance of any norm.<br /><br /> <ul><li>A standard will provide us with a common language. A common language for mathematicians is more than a lingua franca or English as the world language. Formal scientists always need to create the world first before they can talk about it. But at present, there is not even an agreement on whether "natural numbers" start from 0 or 1. This is not freedom, but pure inefficiency.</li></ul> <ul><li>Currently, each constructive idea requires a choice of a concrete programming language when it is implemented. "Higher" languages have an interface concept that abstracts the signature from the details of the implementation. But there are no real standards for the translation between different languages. Each language needs to re-implement all the libraries to be a useful one. Of course, there is XML now, and that is a big step (MathML, OpenMath, and OMDoc are XML, too). But for mathematical structures and theories, XML itself is all too general.</li></ul> <ul><li>This <span style="font-style: italic;">standardization</span> of mathematics is not a new <span style="font-style: italic;">foundation</span> in the meta-mathematical sense. It is a syntactic agreement, not a semantical super-theory. It doesn't explain what a mathematical object really is; it only defines how we define one. 
I suppose this whole movement comes so late because it was anything but obvious from the tradition that a global standard does not need a global ontology. (And maybe in the end, that<span style="font-style: italic;"> is</span> a new foundation, after all.)</li></ul> <ul><li>Donald Knuth introduced<span style="font-style: italic;"> literate programming</span> as an emphasis on the idea that documents should be comprehensible to humans and computers alike. Many programming languages offer "literate programming" tools, but none of them lives up to the promise.</li></ul>Unfortunately, at present there is still a shortage of tools to work efficiently and comfortably with these new formats.bucephalushttp://www.blogger.com/profile/10322402600537850032noreply@blogger.com0tag:blogger.com,1999:blog-5222232069415448248.post-22596564023244702552010-02-02T09:31:00.000-08:002010-02-03T07:55:08.189-08:00PropLogicIn <a href="http://www.bucephalus.org/text/PropLogicProject/PropLogicProject.html">The propositional logic project</a>, I describe my objections to the current state of this subject as follows:<br /><blockquote><br /> Propositional logic sure has become a standard notion of our scientifically educated society, maybe as much as the arithmetic systems of integers, rational and real numbers. Boolean operations are standards in many areas of our computerized daily life; digital logic is the mathematical structure behind the computer hardware and information software.<br /><br/><br />But unlike arithmetic systems and other basic data structures like lists, matrices or regular expressions, propositional algebras are not part of the standard tool repertoire of programming languages. This is certainly due to the cost explosion (see e.g. the boolean satisfiability problem) of its default implementation. 
What we need is a fast implementation, one which allows these structures to be used as basic tools by other programs.<br /><br/><br />The other problem with propositional logic is its classic algebraization as a (free) boolean algebra, which is only an abstraction of the semantic structure of propositional logic. That way, we lose some of the information. In other words, we need an algebraization that also preserves the syntactic structure of propositional worlds.<br /></blockquote><br />I am happy to announce the release of <a href="http://www.bucephalus.org/PropLogic">PropLogic</a>, a Haskell package that intends to fix these problems and that might serve as a general and useful tool.<br /><br/><br />Despite my original intent to write a compact implementation of a pretty compact theory, this distribution is overloaded in an attempt to explain all its aspects. I suppose the best places to start are <a href="http://www.bucephalus.org/text/ExecPropLogic/ExecPropLogic.html">A little program for propositional logic</a> and a <a href="http://www.bucephalus.org/text/IntroToPropLogic/IntroToPropLogic.html">Brief introduction to PropLogic</a>.<br /><br/><br />The first of these two tutorials doesn't require prior Haskell knowledge. Any fast implementation of a propositional algebra also provides a fast <a href="http://en.wikipedia.org/wiki/Boolean_satisfiability_problem">SAT solver</a>, and there is an interest in and competition for the quickest solution. I have no idea how my program performs compared to other existing algorithms out there, but I tried to illustrate with some data how well it does the job. (I must admit, however, that my "fast" program has its limits, too.)<br /><br/><br />The thing seems to work properly as it is, but I would still like to do some polishing and upload it to <a href="http://hackage.haskell.org/">Hackage</a> soon. 
It would be very nice to get a boost from the comments and reactions of the Haskell community.bucephalushttp://www.blogger.com/profile/10322402600537850032noreply@blogger.com23tag:blogger.com,1999:blog-5222232069415448248.post-48998701266936587662009-11-06T13:45:00.000-08:002010-02-03T06:15:20.837-08:00A fast SAT solver<p>A decade ago, I developed a system for propositional logic, based on <em>Prime Normal Forms</em>. The main function takes an arbitrary propositional formula φ and returns its <em>prime conjunctive normal form</em> <tt>pcnf</tt>(φ). Implicitly, this algorithm solves the <em>SAT problem</em>, i.e. it provides a general decision method for the question whether a given formula φ is satisfiable or not, namely:<br /> φ is satisfiable iff <tt>pcnf</tt>(φ) is not (the normal form of) <tt>0</tt>, i.e. "false".</p><br /><p>Obviously, the SAT problem is one of the hot issues in computer science and there is a demand for a fast algorithm. However, it has never been the focus of my own research project, which rather deals with a re-interpretation of modern logic. I just needed a system that provided me with the functionality of propositional logic, and at that time, I didn't know of any available one. I supposed, however, that the solution I found would satisfy a general demand and that my approach yielded some very deep insights into the matter. I published the mathematical theory in a paper. I also wrote a Java applet that works like an online pocket calculator for propositional logic and accompanied it with a couple of tutorials and introductions for all kinds of users.</p><br /><h2>Sketch of the method</h2><br /><p>In my publications I rather use the dual as the default, i.e. I consider Prime Disjunctive Normal Forms, the function is <tt>pdnf</tt> instead of <tt>pcnf</tt>, and the <q>satisfiability problem</q> becomes the <q>validity problem</q>. 
The algorithm for the <tt>pdnf</tt> function is not stochastic or heuristic in nature; it is a strictly deterministic and algebraic procedure. I'll try to sketch its basic features, but let me first recall some (more or less) standard terminology and well-known facts:<br /><ul><br /><li>A <strong>literal</strong> λ is either an <strong>atom</strong>ic or a negated atomic formula, i.e. α or ¬α.</li><br /><li>A <strong>normal literal conjunction</strong> or <strong>NLC</strong> γ is a conjunction of literals [λ<sub>1</sub> ∧ ... ∧ λ<sub>k</sub>] such that the atoms α<sub>1</sub>, ..., α<sub>k</sub> occurring in these literals are strictly linearly ordered, according to some given linear order relation &lt; on the chosen set of atoms. Each λ<sub>i</sub> is a <strong>component</strong> of γ.</li><br /><li>A <strong>disjunctive normal form</strong> or <strong>DNF</strong> Δ is a disjunction of NLC's [γ<sub>1</sub>∨...∨γ<sub>n</sub>]. Each γ<sub>i</sub> is a <strong>component</strong> of Δ. We all know that each formula φ has an equivalent DNF Δ, written φ⇔Δ.</li><br /><li>Given an NLC γ=[λ<sub>1</sub> ∧ ... ∧ λ<sub>k</sub>] and a DNF Δ=[γ<sub>1</sub>∨...∨γ<sub>n</sub>], we say that<br /><ul><br /><li>γ is a <strong>factor</strong> of Δ, if γ <strong>implies</strong> (or is <strong>subvalent</strong> to) Δ, written γ⇒Δ.</li><br /><li>γ is a <strong>prime factor</strong> of Δ, if it is a factor and none of its components λ<sub>1</sub>, ..., λ<sub>k</sub> could be deleted without violating the subvalence γ⇒Δ.</li><br /></ul><br /></li><br /><li>A DNF Δ=[γ<sub>1</sub>∨...∨γ<sub>n</sub>] is called a<br /><ul><br /><li><strong>prime DNF</strong> or <strong>PDNF</strong>, if the set of its components {γ<sub>1</sub>, ..., γ<sub>n</sub>} is exactly the set of all its prime factors.</li><br /><li><strong>minimal DNF</strong> or <strong>MDNF</strong>, if there is no other equivalent DNF which is smaller in size. 
(The <strong>size</strong> of a DNF is the number of components and atom occurrences.)</li><br /></ul><br /></li><br /><li>Every propositional formula φ has an equivalent PDNF. This PDNF is unique (up to the order of its components). So the function <tt>pdnf</tt> that returns the equivalent PDNF <tt>pdnf</tt>(φ) for every given φ is a well-defined <em>canonization</em> of propositional logic.</li><br /><li>Every φ also has an equivalent MDNF. But this MDNF is not unique in general. It is however always a subset of the PDNF, in the sense that each component of the MDNF must be a component of the PDNF.</li><br /></ul></p><br /><p>Our goal is an implementation of the <tt>pdnf</tt> function, i.e. the construction of an equivalent PDNF Δ for each given formula φ. The real core of this function is the <strong>P-Procedure</strong>, which takes an arbitrary DNF Δ and returns the equivalent PDNF <tt>P-Procedure</tt>(Δ). A classical method to implement the P-Procedure is the <em>Quine-McCluskey method</em>. But that algorithm grows exponentially and is not feasible for anything but small input DNF's. We need something else, and we start with the idea of pairwise component minimization, which we call the <strong>M-Procedure</strong>:<br /><ol><br /><li>We take two components γ<sub>L</sub> and γ<sub>R</sub> of the given Δ and replace them by the components of their minimal form, i.e. the MDNF [μ<sub>1</sub>∨...∨μ<sub>m</sub>] of [γ<sub>L</sub>∨γ<sub>R</sub>]. Obviously, m is either 1 or 2, so this step can only decrease the size of Δ.</li><br /><li>We repeat the first step until no more changes can be applied.</li><br /></ol><br />The resulting DNF, denoted by <tt>M-Procedure</tt>(Δ), is what we call a <strong>pairwise minimal DNF</strong> or <strong>M2DNF</strong>, i.e. a DNF where each pair of components makes a minimal DNF. 
It is easy to prove that<br /><ul><br /> <li> each PDNF is an M2DNF, and </li><br /> <li> each MDNF is an M2DNF.</li><br /></ul><br />But neither of these facts holds the other way round. <tt>M-Procedure</tt>(Δ) is neither the prime nor a minimal form of Δ, at least not in general. The M-Procedure is not a realization of the P-Procedure (hence the two different names). But it will serve us well in a proper implementation of the P-Procedure...</p><br /><p>I suppose that most people who have spent some time and concentration on the SAT problem have tried this approach of an <q>M-Procedure</q>. It is not a trivial matter to understand why it has to fail. The notion of <q>prime</q> in propositional logic is probably motivated by the corresponding concept in number theory. But a closer investigation reveals a surprising and fundamental difference between prime factors in propositional formulas and in integers. This problem, but also its solution, stems from the analysis of binary DNF's [γ<sub>L</sub>∨γ<sub>R</sub>].</p><br /><p>For every two NLC's γ<sub>L</sub> and γ<sub>R</sub> we write<br /><ul><br /><li><tt>min</tt>(γ<sub>L</sub>,γ<sub>R</sub>) for the MDNF of [γ<sub>L</sub>∨γ<sub>R</sub>], and</li><br /><li><tt>prim</tt>(γ<sub>L</sub>,γ<sub>R</sub>) for the PDNF of [γ<sub>L</sub>∨γ<sub>R</sub>].</li><br /></ul><br />These functions <tt>min</tt> and <tt>prim</tt> have straightforward implementations (of linear complexity) and they are not hard to explain. 
The actually interesting and crucial point here is the fact that<br /><ul><br /><li><tt>min</tt>(γ<sub>L</sub>,γ<sub>R</sub>) is made of either one or two components, as mentioned earlier,</li><br /><li><tt>prim</tt>(γ<sub>L</sub>,γ<sub>R</sub>) is often the same as <tt>min</tt>(γ<sub>L</sub>,γ<sub>R</sub>), but there is also a situation where <tt>min</tt>(γ<sub>L</sub>,γ<sub>R</sub>)=[γ<sub>L</sub>∨γ<sub>R</sub>] and <tt>prim</tt>(γ<sub>L</sub>,γ<sub>R</sub>)=[γ<sub>L</sub>∨γ<sub>R</sub>∨γ<sub>c</sub>] is a 3-component DNF. For example, consider<br /><blockquote><br /> <tt>prim</tt>([<tt>A</tt> ∧ <tt>B</tt>], [¬<tt>B</tt> ∧ <tt>C</tt>]) = [[<tt>A</tt> ∧ <tt>B</tt>] ∨ [¬<tt>B</tt> ∧ <tt>C</tt>] ∨ [<tt>A</tt> ∧ <tt>C</tt>]]<br /></blockquote> <br />This third and new γ<sub>c</sub> is what we call the <strong>c-prime</strong>.</li><br /></ul><br />Now we are able to implement the P-Procedure:<br /><pre><br />Algorithm P-Procedure(Δ)<br />begin<br /> Δ' := M-Procedure(Δ) ;<br /> repeat<br /> (1.) Δ'' := Δ' ;<br /> (2.) let Π be the set of all c-primes of component pairs<br /> in Δ' ;<br /> (3.) attach all the components of Π to Δ' ;<br /> (4.) Δ' := M-Procedure(Δ') ;<br /> until Δ' and Δ'' contain the same set of components ;<br /> return Δ' ;<br />end.<br /></pre><br />The proof of the correctness of this P-Procedure is based on a deep result which I called the <em>Completeness Theorem</em>, saying that a DNF is a PDNF iff it is a <q>c-complete M2DNF</q>.</p><br /><p>As for its computational complexity: if n is the number of different atoms in Δ, then the P-Procedure needs no more than n <tt>repeat</tt> loops. This, together with the fact that the M-Procedure is of polynomial complexity, led me to suggest that the P-Procedure is of polynomial complexity as well. And that, of course, would have been a surprising answer to the open <em>P=NP problem</em>. 
When I realized that, I spent some time looking for evidence for or against my conjecture, but I was only able to deliver some lemmata and <q>partial proofs</q>, and no definite decision.</p><br /><h2>Links</h2><br /><p>All mentioned material is available on <a href="http://www.bucephalus.org">www.bucephalus.org</a>. In particular:<br /><ul><br /><li>The paper called <a href="http://www.bucephalus.org/text/PNFCanon/PNFCanon.html">Theory and implementation of efficient canonical systems for sentential calculus, based on Prime Normal Forms</a> is the primary source for the approach I sketched here.</li><br /><li>The Java applet is called <a href="http://www.bucephalus.org/bucanon/BucanonAppletOnline.html">bucanon</a> and there is also a <a href="http://www.bucephalus.org/text/BucanonGuide/BucanonGuide.html">guide</a> to the literature and tutorials.</li><br /><li>There is a Haskell package about to be published soon that provides a full system for propositional logic with both "default" and "fast" implementations. The fast ones are indeed based on the mentioned normalizations. [added in February 2010: <a href="http://www.bucephalus.org/PropLogic/">PropLogic</a> is now available]</li><br /></ul><br /></p>bucephalushttp://www.blogger.com/profile/10322402600537850032noreply@blogger.com1tag:blogger.com,1999:blog-5222232069415448248.post-76536124106172213592009-10-12T18:48:00.000-07:002009-11-06T14:41:43.100-08:00My new communist card game<p>The <a href="http://www.bucephalus.org" title="www.bucephalus.org">homepage</a> has undergone a complete makeover. Design is not really a goal in the first place; information is more important than aesthetics. However, dissatisfied with widespread features of mainstream design patterns, this latest version has some rather unconventional features:<br /><ul><br /><li>There is one index page (the start/welcome page) and many single pages.</li><br /><li> Each single page concentrates on its subject. 
It only has a link to the index page by default, instead of carrying a whole menu and framework around.</li><br /><li>The index page is<br /><ul><br /><li> comprehensive: it shows all tables of content in full; there is no need to browse to further pages,</li><br /><li> compact: achieved by putting long content tables into scrollable cards,</li><br /><li> communistic: every item is an equal card in the whole game.</li><br /></ul></li><br /><li>The latest fashion of many blogging frameworks (including the present one), fixing the width of the page content, destroys the advantage of HTML over print formats (like PDF etc.), namely that it efficiently nestles into the browser window, be it a tiny smart phone or a huge cinema display. In particular, the cards that make up the index page are supposed to distribute nicely inside the window.</li><br /></ul><br />Hopefully it works and you like it.</p>bucephalushttp://www.blogger.com/profile/10322402600537850032noreply@blogger.com0tag:blogger.com,1999:blog-5222232069415448248.post-68121043732485342022009-10-01T05:45:00.000-07:002009-10-01T06:12:43.171-07:00Half a tutorial on the Haskell number systemDear nice Haskell people out there!<br /><br />Thank you for your friendly and numerous reactions to my number system picture on different web locations. It seems that many people feel the same pain when it comes to numbers in Haskell. Even such brilliant introductions as <a href="http://book.realworldhaskell.org">Real World Haskell</a> seem to capitulate before this idiosyncratic complexity and rather sum up the facts. As I said, I gave up on it as well. But your reactions keep itching.<br /><br />So please, allow me to show you at least the existing half of my <a href="http://www.bucephalus.org/text/Haskell98numbers/tutorial-draft.pdf">tutorial</a>. The missing part is the actual reconstruction of the type classes. 
At some point, I tried to combine that part of Haskell with a reconstruction of the mathematical evolution from natural, to integer, ... to complex numbers. Here is <a href="http://www.bucephalus.org/text/Haskell98numbers/math-evolution-draft.pdf">a glimpse</a> of what I had in mind. I thought that many programmers could use this kind of update, which is necessary knowledge if one really wants to understand the logic behind the type class zoo. <br /><br />Originally, this tutorial started off as just a section of an introduction to Haskell itself, some kind of "Haskell for mathematicians", with the ambition of being "the first truly functional introduction to this functional language". What I missed in all the classic texts is a purely conceptual or semantic approach to the matter. For example, they explain "if..then..else.." as a language construct that needs proper alignment etc. But "in fact" (i.e. in a functional brain), this is a function of type "(Bool,a,a)->a" (that accidentally happens to have a non-default syntax). In other words, instead of forcing people to learn the language first, before they can decide if they want to think that way, I thought I could start with the philosophy right away before going into the formal details. I cut out the <a href="http://www.bucephalus.org/text/Haskell98numbers/intro-draft.pdf">part of the original introduction</a> that attempts to sketch the Haskell universe the way I try to approach it.bucephalushttp://www.blogger.com/profile/10322402600537850032noreply@blogger.com0tag:blogger.com,1999:blog-5222232069415448248.post-66348570922666469292009-09-29T09:50:00.000-07:002009-09-29T10:28:32.222-07:00Haskell number system in one pictureAs part of an introduction to the number systems of Haskell 98, I drew an overview in an attempt to capture the chaos. Later on, I gave up on the text, but the picture might still be a useful reference for some programmers. 
I think that all mathematical functions are present; for the string conversions (in particular the Show and Read type classes and the converters from the Numeric module), I drew a different picture.<br /><br />Note that the following PNG image became quite distorted when it was generated from the LaTeX source. But the proper version is available as a <a href="http://www.bucephalus.org/text/Haskell98numbers/Haskell98numbers.pdf">PDF</a> or a <a href="http://www.bucephalus.org/text/Haskell98numbers/Haskell98numbers.ps">PostScript</a> file.<br /><br /><p><br /><img width="400" src="http://www.bucephalus.org/text/Haskell98numbers/Haskell98numbers.png" /><br /></p><br />bucephalushttp://www.blogger.com/profile/10322402600537850032noreply@blogger.com3tag:blogger.com,1999:blog-5222232069415448248.post-82068252636984069392009-09-26T12:38:00.000-07:002009-09-26T12:50:24.031-07:00WelcomeAfter having designed several PHP versions to increase the interactivity of <a href="http://www.bucephalus.org">bucephalus.org</a>, I decided to use the existing infrastructure of <a href="http://www.blogger.com">blogger.com</a>. Thank you.