Notes re ASP.NET MVC 5

Checking out ASP.NET-MVC 5 using Visual Studio 2013 Ultimate on Windows 8 x64, I’m seeing the following issues as of 2014-1-21:

1. Right off the bat – a lot of JavaScript runtime errors, with even just the initial skeleton web application that Visual Studio produces. See screenshot – this is from just launching the blank web application using Internet Explorer.

Screenshot of the JavaScript runtime errors
Google does not yield answers for this that I can see – although I do see a couple of complaints by other users. Evidently the consensus is that this can be ignored for now. I don’t find that very satisfying.

2. The Areas feature does not work consistently – or at least, not as advertised within the tutorials (Note: the capitalized "Area" indicates the ASP.NET-MVC feature of that name, as opposed to the generic word "area"). I can reach into an Area by typing the full URL into a browser window, including a controller name such as "Home". For example, I added an Area named "LostAndFound". I can get to that by typing the URL with the suffix "/LostAndFound/Home/", but not "/LostAndFound/". ActionLinks from the main area into a particular Area do not work as claimed – you have to include the full namespaces argument to distinguish them. You cannot have a same-named controller class, contrary to the documentation's claim. Google does not yield answers for this that I can see. Suggested solution: avoid Areas altogether (a sketch of the documented work-arounds appears after this list). If you know the solution for this, please leave a comment.

3. Compared to the skeleton web application that Visual Studio 2012 produced for MVC 4, the Images folder has been removed, and the CSS styles for the unordered-list with the round graphic images for the list-item numbers are missing. You can fix this by simply copying these over from another web application that you created for MVC 4 – both the Images directory and its contents, and the CSS styles from Content/Site.css. I am not seeing any comments on the web as to why this was removed. I rather liked it.

4. Ditto for the aside element styling that had been in Content/Site.css. I added that back in. Why was it removed?
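Regarding the Areas problem in item 2 – for reference, here is a sketch of the registrations that the documentation implies should make Areas resolve, including the namespaces arguments that, in my testing, you cannot omit ("MyApp" is a placeholder for your project's root namespace):

// In /Areas/LostAndFound/LostAndFoundAreaRegistration.cs:
public override void RegisterArea(AreaRegistrationContext context)
{
    context.MapRoute(
        "LostAndFound_default",
        "LostAndFound/{controller}/{action}/{id}",
        new { controller = "Home", action = "Index", id = UrlParameter.Optional },
        new[] { "MyApp.Areas.LostAndFound.Controllers" });  // the namespaces argument
}

// In App_Start/RouteConfig.cs, the main route likewise needs its namespace pinned down,
// or a controller named "Home" becomes ambiguous:
routes.MapRoute(
    name: "Default",
    url: "{controller}/{action}/{id}",
    defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional },
    namespaces: new[] { "MyApp.Controllers" });

// And an ActionLink from the main area into the Area must carry the area route-value:
@Html.ActionLink("Lost and Found", "Index", "Home", new { area = "LostAndFound" }, null)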

Posted in Uncategorized

Visual Studio 2013 is released, with F# 3.1, the Visual F# tools, and improvements to ASP.NET

Visual Studio 2013

Microsoft has released Visual Studio 2013. This brings, amongst other things, F# 3.1, improvements to ASP.NET (including ASP.NET-MVC and Web API), and updates to Entity Framework.

Here is the Microsoft download page for Visual Studio.

See here for a writeup on ScottGu’s Blog.

F# 3.1

This new version of the F# language introduces incremental enhancements to the language and tools-platform that further its usefulness to the developer. These include named union type fields, extensions to array slicing, improved type-inference for LINQ-style methods, better support for C# extension members, and enhancements in the support of constants in attributes and literal expressions.

See this post on the Visual Studio F# Team Blog.

If you have not yet explored the F# language I would encourage you to check it out. It is a great tool to have in your kit.

Posted in F#, Visual Studio

The Minimal-Mental-Model Hypothesis: Identity

What if another person were instantly brought into existence, next to you, who was identical to you in every physical way right down to the molecule? Would that be you, or just a replica of you? Would that person matter just as much as you do? Would you care about that person's welfare exactly as much as your own?

What if.. there were a device that could "beam" you to another location — not by moving your atoms over to that location, but by assembling in-place an exact replica, right down to the molecule — and the instant that new entity comes into existence, you yourself are disintegrated into oblivion. Sort of like the sci-fi concept from the Star Trek movie series. For the sake of argument, say the new entity was created so precisely that even the electrochemical activity within your brain was duplicated at that instant in time, such that the memories and thoughts fired up in exactly the same state that your own brain was in. You could regard this event in two very different ways: you could say that your body was just moved from one position to the other. Or, you could say that you died, and someone else took your place and carried on.

Another aspect of our consciousness that is evidently at least partly a fiction is that of identity. This is important for reasons that will become evident as I develop this. It took a little effort to arrive at a working implementation of this aspect of consciousness within the Mindth project.


Imagine that you have before you ('you' being our hypothetical animal standing in the jungle) two things. These things matter to you for whatever reason: perhaps they're something you tend to bump into, so their location is important in your mind. Or perhaps they're predators — in which case, their location is very important to you. To track them, your mind applies a label to each one. Let's say the one on the left gets tagged with a post-it note with "A" written on it. The other, with "B". Now, your brain finds it easier to think about them. One agent within your mind can call out "Watch out! A is moving nearer!". Or: "B is asleep. Let's go nab an egg."

Database-development engineers are well familiar with this concept. Many databases use tables of information, each table containing many rows of 'records' — of, for example, books for a bookstore. A central issue is: how do you distinguish one record from another? Suppose, for example, you have multiple books (that is, multiple records) that have the same title, and the same author? The designer of the database will often give each record an additional attribute: an identifier. That can be a simple number; a number which is different for every record. This number has absolutely nothing to do with the book; it exists purely for identification purposes.
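A minimal sketch of the idea, in C# (a hypothetical schema): the Id carries no information about the book itself; it exists purely to tell two otherwise-identical records apart.

class BookRecord
{
    public int Id;        // surrogate key: unique, assigned by the database
    public string Title;
    public string Author;
}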

That’s the concept. A more interesting example within the world of database-design is that of a table of people. You can store all sorts of traits about them, but how do you know for sure that you don’t have two records referring to the same entity? Names can be the same, dates-of-birth, even the address. In the U.S. it is not necessarily a straightforward solution to require a Social-Security Number for every person. Some don’t have one, or perhaps a given record might be for a person who is not within the U.S.

This same issue translates into our interaction with the world around us, and our mind uses an analogous mechanism to deal with it. We create an additional attribute to apply to various objects in order to distinguish them from one another.

Our mind often does this so automatically (perhaps 'reflexively' is the better term) that we fail to realize it. Given two things that are in front of us: suppose now that both things are exactly identical. For every relevant property (size, color, texture, everything) they are the same. They might even be identical right down to the atom. But if at some point your mind has occasion to hear one called "A" and the other called "B", then from that point onward that label (A or B) is pictured in your mind whenever you think of the thing that A or B is associated with. Our minds start to confuse the label "A" with the thing that is being called "A". Absent any sort of label, your mind will hesitate for a moment. If it must, it'll simply create a label that says, essentially, "The one on the left" and "..on the right". If they then switch positions (and you see it) your brain again experiences a moment of confusion as it attempts to realign. But if they're not quite identical, your brain will quickly zero-in on some feature, any feature, that appears useful to distinguish the two.

Often this is quite useful. Hence: this trait prevails in our mind. But it can lead to some misperceptions if we stick to this conceptualization without having an open mind on certain occasions.

Identity, that concept that our mind mechanically attaches to things we perceive, is largely a fiction. A construct of the mechanisms at work within our brain, implemented for purely pragmatic reasons. It is not ‘of‘ the external world: it is something we layer on top of it.

To take one example where this mechanism illustrates a shortcoming of our limited mental-model: in the most-common present-day religions, often it is a tenet that there is one “god”, and that humans are commanded to worship only that specific god. However (following a train of reasoning that presumes this to be authentic) that god is not identified, other than through the fact that there is only one. No need to apply a name, or some artificial “identifier” bit of information. Sometimes, there are no actual descriptive traits to use to identify this god: no height or weight, no address, no fingerprint or DNA sequence.  However, when you introduce the possibility of other gods – now it becomes important to assign some kind of identifier. Absent that, it becomes nonsensical to talk about “the one god”, if you have no way to identify him or her. Without, for example, a photo of how he (let’s just use the masculine pronoun for the sake of brevity) physically appears, or his Social-Security number, or a name that is truly unique (which would have to be something other than “God” one might presume) – it can be impossible to know which god you’re referring to, and thus it is impossible to know whether you are worshipping the correct one. The salient point is that the mental-model we use to think about this – breaks down.

A friend. Watching you.

In modeling the mind, you have to step down a bit and not try to over-philosophize this. The visual-system identifies possible "things", and another mechanism quickly (and subconsciously) puts labels on them. The brain operates as though every thing already has an inherent identity. It depends upon it: the gears hesitate for a moment, otherwise. This is an important clue.

This is related to why the brain is quick to categorize groups, as for example races of people. It is a subconscious instinct, although this does not mean that we humans cannot manage this consciously. When confronted with a new group of people who seem important to your world in some concrete way — you feel a sense of discomfort, like you're failing to quite comprehend the world around you — until you give this new group a label. Imagine a mob of strange people comes into a room, and an observer nervously asks "who are they?" You respond: "Oh, those. They are just blue-bellies." Now, that observer is calmed. He perceives that he understands the intruders; his comprehension has managed the event and the source of discomfort is removed. In modeling this behavior in the computer program I am designing – it's a very useful mechanism.

A primate, thinking

There is another, quite bizarre path that I am presently exploring. It is possible (not 'possible' in the sense of physically established, but rather in the sense of being open-minded to its plausibility) that certain of the things we are aware of as distinctly-labeled entities are actually not distinct. That is to say: our mind regards them as separate things, but actually they're just different views of the same thing. This may apply to regions of space, as well as to physical objects. Perhaps they 'wrap around', whereby you look in one direction and perceive something, and then you look in another direction, see another thing that looks the same as the first – and your mind gives it another label because it seems to be in a different place. But this time, that is a mistake. This might be considered analogous to standing within a hall of mirrors of the sort amusement-parks used to have: you visually see multiple instances of you, or of someone standing near you. Your mind tricks you.

Remember, as Mother Nature brewed up your brain – she only designed it to deal with your immediate environ. Whatever is good enough for us to get by, to survive and procreate – that was enough. That applies, obviously, only to a very tiny range of scales of space, of time, and of speed — and with further substantial limitations. As we endeavor to broaden our understanding of the Universe, we need to contemplate the possibility that our mental-model is a cage from which we will need to free ourselves.

A universe of many things

Posted in Artificial-Intelligence, consciousness

Ruby on Rails: a brief evaluation

I have used a lot of different computer-programming languages over the years. I've had some favorites, fussed at a few, produced some decent software with them.

The latest that I have endeavored to use, is Ruby.  More specifically, a website-creation framework known as Ruby-on-Rails.

This language and platform takes the prize:  It has been the greatest black-hole of time, the greatest source of frustration I have ever associated with a programming platform.

Not least amongst the frustrations is the plethora of people who say they like it. Perhaps that is a phenomenon that results when a technology becomes fashionable for whatever reason – enough so that everyone using it is already sitting next to someone else who is well-versed in it. I'm only being half-serious, of course. I believe it is partly because I come from a systems-level software engineering background, with no small amount of experience with full-bore capable languages like C++, C#, Ada, and Java. I think perhaps I have developed a mental outlook whereby I depend upon the language to be well-defined, explicit and clear. With C#, for example, I can always look up where a given class or method comes from, how it expects to be used, and what it gives me. Always; even if it is poorly commented – I can at least find out how it is defined at the syntactic level.

Not so with Ruby. Variables may not exist until they are assigned to, and it is rarely clear where they come from. My toolset has no ability to jump from usage to definition, as you can in Microsoft's Visual Studio. The definition of the language itself is bizarre. And I don't mean that in a good way. It's not just a different syntax: the design of the language itself seems disjointed and ugly. What books I have found, what bits of online tutorials — all leave out huge portions of the story.

For example:

attr_accessible :variable1, :variable2

What does that do? Does it declare two instance variables? Or does it say that, if those two variables ever did happen to exist, they would be assignable?

That looks ugly as shit.  Compare that with how you declare three instance-variables in C#:

public int _count = 0;   // visible to any caller
protected string _name;  // visible to this class and its subclasses
private int _id;         // visible only within this class

Here, I have used a convention whereby you can instantly know that something is indeed an instance variable because it starts with an underscore (not a requirement, just one popular convention). Your variables have a type. They always have that type (unless it's a dynamic variable, but that's a different story). The best feature of static typing is that if you misspell a variable somewhere when you attempt to use it, or just neglected to declare it at all – the compiler immediately tells you. What the compiler does not do, is watch you fumble and tinker and fumble and google and get frustrated, while it chuckles smugly.
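To illustrate that point (a trivial example of my own devising):

class Counter
{
    private int _count = 0;

    public void Increment()
    {
        _count = _count + 1;    // correct: refers to the declared field
        // _cout = _count + 1;  // would not compile: error CS0103, '_cout' does not exist
    }
}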

With C#, you can see where these come from. What type they are. What value they’ll have if not assigned yet. And there is no question of how to refer to them.

With Ruby, how do you use an instance variable? Say you are inside an instance method of class MyClass. Do you use variable1 ?  self.variable1 ?  @variable1 ?  @@variable1 ?  self#variable1 ?  :variable1 ?  or self[:variable1] ?

Yes, I did find some clues to some of those – but not a consistent, clear answer that worked without a lot of trial-and-error.

The syntax is horribly counter-intuitive. If you have an instance variable, denoted by @name, and within a method you assign a value to it but neglect the leading at-symbol, you have instead created an entirely new, local variable. No warning is given – you simply have a bug that you will hopefully discover in your runtime testing.

You can create programs with Ruby. But I would suggest that neither Ruby nor Rails is a tool for creating programs that need to actually work reliably.

Very important tip: Use a virtual machine (VM) !

Since I needed to use Linux as the development platform anyway, I created a virtual machine for this purpose. This is a good policy in general, for development. I use VMware Workstation 10, and in this case loaded it with Ubuntu 13.04. I used Sublime Text 2 for the text editor, and Thin for the development web server (Webrick evidently is broken).

I believe one reason Ruby-on-Rails (RoR) became popular is that it does provide some nifty generators for creating a minimally-functional CRUD program based upon some model. That is a good feature, because it jump-starts you with a working (although nowhere near presentable, of course) website that you can use as a starting point. You enter a command in the Linux shell, and it makes it for you. That was the only way I could create a program: I ran their scaffold generator to create an initial cut. Several times. Started over. Searched for other tutorials that actually worked (most don't!!!). Then once I had a program that ran without error, I took a Snapshot (VMware's term for capturing the exact state of the virtual-machine) of my VM at each stage when things did work, so that I could revert back to it. Whenever things became a hopeless dead-end, I took notes, dumped the freak'n thing and reverted back to the most recent (working) Snapshot.

With such a vague, unclear language-definition at the core, how does a system survive? But wait – there’s more!

There are many parts to RoR. The tool-stack seems quite large, and then you must bring in plugins and gems to accomplish anything. With RoR you have to be very conscious of versions. Most parts do not work with the latest versions of the other parts. So you have bundles, or gemsets, or environments, or sandboxes. It is cool to have the idea of a Gemfile, which specifies what to bring into your project, and exactly which version. That, of itself, is more than just a nice feature – it's essential to do anything. After you have spent weeks of long hours trying to discover what works with what (and which of the many parts simply do not work, never will, and perhaps never did) – the Gemfile saves you by freezing in place your selection of gems and versions of those gems.

That seems to be the essence of RoR. It is a stack of parts, each and every one of which has its own personal project and community of developers behind it. If you are extremely lucky, enough of those parts will work to arrive at a working solution. The language itself is very terse. And if you like concise definitions – that might please you at first, until you realize it is so unclear and vague that it makes no intuitive sense unless you are intimately familiar with this specific platform.

I found that everything I coded was basically copy-pasting snippets from samples. Which, although occasionally useful, were also the greatest source of frustration. For example, Stack Overflow has a lot of Q&A. Most of the answers did not work. Or were flat-out typed incorrectly. Not a few — MOST. On that website I do not see how to down-vote an answer, which is a shame because most are clearly shit. That is a whole separate story: why anyone would consider it acceptable to respond to someone's technical question by typing up a code-snippet without checking whether it works, is beyond me.

Another problem might be in the nature of open-source software. There are too many script-kiddies shoving gobs of code into the project – code of very crappy quality. They often don't document anything. Or they write incorrect documentation. And the pieces are frequently not compatible with each other. Perhaps because they have no skin in the game – they're throwing it in for free anyway — there's little incentive to invest time to make it a quality product. There is no software engineering — it's just gobs of junk code.

That phrase "you get what you pay for" is not always true. But with RoR I believe it does apply. This is not a toolset you want to employ just because it is free. I highly recommend you explore something like ASP.NET-MVC with C# and Microsoft's Visual Studio: it is finely engineered, and you acquire skills with a fine state-of-the-art programming language that handles many roles with aplomb. Or Java, or Python with Django.

With C#, for example, you’ll type more characters on the keyboard. The language is not as concise as Ruby. You have to declare your variables, and decide which data-type they’ll be. But only a fool bases his comparison on that alone. It’s better to type a few more characters, and have a programming construct that is clear and unambiguous, that you can come back to later and maintain and change and make use of, than to have a terse little set of characters that mysteriously works because you spent an obscene amount of time tinkering and googling to get it to.

I have noticed that there are websites claiming to measure the popularity of programming languages, based upon the amount of Google traffic or number of questions on Q&A sites.  I would suggest this is misleading. When I was first learning Java, or C#, I found it very easy to pick up a book, play around with the tutorials, and dig into creating stuff. With a clear syntax, clear semantics, and a little bit of Googling – it wasn’t long before I was quite comfortable with every aspect of the language and could create fairly complex software. With RoR the opposite situation prevails: a massive amount of googling is necessary for virtually every step of the way. I cannot overstate this point — RoR is so unclear and confusing that you will spend an obscene amount of time googling, trying tips and techniques that are incorrect, before making any headway.

So, Ruby-on-Rails: the worst possible choice for a web-development platform. The scenario where it can work, I would say, is where you already have someone who is quite familiar with it (preferably more than one, so that they can team up on it). I'm guessing that would probably be someone who started learning programming with Ruby as their first language, so that they did not start out with a mental outlook that resisted Ruby's quirky syntax and lack of solid parts. If you're more than just a small one-project startup — if, for example, you have multiple software products to build or maintain — you may find that your RoR developers are worthless for your other projects, because RoR is all they know. With Java or C# you have a language and toolstack that you can redeploy for a lot of other needs. This is not an authoritative assessment, of course — it's just an impression from a first experience with it.

Posted in Software Design, Uncategorized

Looking through the keyhole

A Minimal-Model Hypothesis of Consciousness

The process of creating a 'model' of the mind, encompassing its myriad facets of perception, consciousness, emotion, and intelligence (all very ill-defined terms in this context), has awakened me to some rather fascinating revelations concerning ourselves. It matters not whether the subject is human consciousness, or that of some other animal species – the mechanisms are, in all relevant aspects, the same.

This is the story of the beginning of a very strange tale..  a vision not so much of ‘how things are’ (meaning the reality of the Universe that is ‘out there’), as it is of how little we can see of it. It follows that coming to a realization of how removed and incomplete our mental understanding is – is quite important for the advances that I want to make.

A photo of the Orion Nebula

An alternative way of looking at consciousness is evolving within my mind. The most remarkable aspect of it? How it impacts the way one might regard the physical world. Re-reading over how I have explained it, I'm struck by how little there is within this that is actually new. There is, at this point, not much that you can pick up off the table and hold forth as groundbreaking or visionary. But as it has brewed in my mind these past few months (ever since Burning Man 2013) – its importance has grown.

In the beginning…

(yeah, it has to start like that. Sorry.)

from out of the primordial soup of fertile molecules that gave rise to animate life, that mysterious designer called Evolution (let's abbreviate that to Ev) needed to create a functional brain. Not so much 'intelligent', but really just functional. In fact – the term "intelligent" can be misleading. It assumes facts not in evidence. "Functional" is the operative word — in that it functions just enough to accomplish its purpose. Functional enough to enable these evolving creatures to survive long enough to bear offspring, and perhaps to hang out a little longer to help nurture them, and that's it. Ev didn't actually care whether the world-view of these evolving creatures had any accuracy. Ev simply needed functioning little brains, to power these creatures through a short life and get the job done. For these little brain-mechanisms to work, they needed a mental-model. That is, a model that described the world around them. Let's use the term "mental-model", because that closely describes what it actually is. It's in the same sense that we use the term "model" in applied mathematics or engineering.

Since a brain is a relatively expensive piece of biological machinery (in terms of metabolic demands upon the rest of the body), Ev had to economize. Ev had to be creative, and design a mental-model so simple that this primitive brain could operate it. After all, out of the vast Universe — only the immediate environ need be represented within this model. And when we say "represented" here, it's not a photographic image of that environ. It is more accurately described as a simplistic set of rules, an abstract algebra that defines operations upon those sensory inputs and yields useful behavior. Imagine yourself as a primitive salamander-thingy being set loose upon the jungle. What is needed for your brain? What does it have to accomplish? Just to get food, mate (or whatever other process serves your species for reproduction), and avoid bumping into trees and shit as you move about. Add a few other details, such as avoiding water (if you can't swim) or using it to escape from predators (if you can), seeking shelter, and scratching your butt.

The important part of this story — is the realization of how limited the mental-model held by the brain is, relative to what would be needed to truly model the world around us. Do note that the term “truly” is truly lacking in definition. We are on a track for which words struggle to carry their normal weight.

Ev created our mental-model on a tight budget. She gave us functional parts, but not a real understanding. Everything we can conceive is just a composition of primitive physical conceptualizations. Objects. Empty space between those objects. Ground. Water. Movement amongst those objects. Other critters. Relationships between critters only insofar as it behooves our family-life. And that's .. it. We can do some abstract stuff — and that's what saves us, in a certain way. When you imagine some modern-day thing that didn't exist in the early eons of our evolution, note how your inner model of that thing is really very, very primitive — often with a thin veneer of abstract logic overlaid on top of it. For example, note our mental image of the atom: a nucleus with electrons orbiting around it. That's a simplistic picture that our mind feels comfy with. We graft some additional layers over top of this – orbits being limited to certain definite energy-levels, spin, color.. but if someone says 'atom', what do you envision in your mind? How is it represented within high-school textbooks? We always return to that crude representation. One which, I believe, is absurdly off.

But the limitations go beyond that. Far beyond.

I am theorizing that our conception of the passage of time, of ‘things’ moving along a path, the substance of the space around us — and the distinction between matter and empty-space — all of these are fictions that enable our mental-model to be simpler. And more economical for the brain-mechanism to implement.

Note that I have used intentional language to describe Ev's process of creating our minds. That's intentional. It expresses the idea of purpose, in the way that we sometimes ascribe intention to the seemingly random processes of natural selection. But intention and purpose are ill-defined terms, and I use them here loosely. Please do not infer from it that I am assigning Ev god status, or vice versa.

Our conventional world-view of our sight, for example, is simply that of an eye receiving the light of the outside world, and that light being converted into electrical impulses which travel down the nerve-bundles to our brain. We "see" the world around us (by this view). This is how that process is depicted within our textbooks.

I suggest a different world-view: that the light from the outside world — and all of the other inputs to our physical senses — is filtered and interpreted and changed far, far more severely than we ever imagined. Picture a painting — rich in myriad colors and shapes and motions, far, far richer and more multi-dimensional than I can convey in words. Now picture within that painting a tiny circle the size of a pinhead. That circle represents our own mental-model. It is gray, faint, fuzzy, totally distorted beyond all recognition. But it is the only mental-model we have. Inside of that circle is all that we can conceive. Outside of that circle is everything else — which does not fit within our mental-model, and thus we are incapable of perceiving it, or even of conceiving of it.

That is a very severe limitation. The boundary of that tiny little circle represents the boundary of our comprehension. The normal animal brain bumps into that boundary and stops. That might be a fuzzy boundary, in that some minds venture beyond it just a bit more than others.

This, then, is why the things that we learn, once we progress beyond the immediate physical world that we evolved within — can seem so absurd to us. The foremost examples that come to my mind include the phenomena associated with quantum physics. Relativity. Electromagnetic energy. The particle-wave duality. And dark matter – which is theorized to make up the majority of the matter of our Universe.

I think it is more likely that it is we who are absurd. Or rather, our mental-model. All of those afore-mentioned phenomena are just the real world.

Here are some corollaries of this view:

Our brain is limited to the tiny circle when it is operating as it commonly does – in the normal, sane state. But when the mind is not normal, as for example when hallucinating or crazed in some way — we may perceive some pretty strange shit. The vast majority of the time — that is just junk observation. The drink, the drugs, the medical condition that gave rise to your crazed state — these yield crap. The neural signals are scrambled and yield only nonsense. But, in a vanishingly-small portion of those cases, what was perceived was real. Regardless of how crazy it was: if you saw little green dudes hanging upside-down from your ceiling, it is a (vanishingly-small, but non-zero) possibility that you were getting a glimpse of something beyond that tiny circle. What you saw could have been real. But you sound crazy when you try to describe it to others.

A beluga whale and her newborn

It is unfortunate that, in the vast majority of cases, people say nothing but stupid shit when coming back from such an experience. It is unfortunate because we could probably increase our understanding if we could only see what they saw. And if we had some means of discerning the nonsense from the narrow slivers of truth.

There is another, ‘area’, that is inevitably opened by these explorations, which we can usefully bifurcate into two topics:  magic, and religion.

While the explanations for how we came into being, and for the origin of the Universe, that are offered by the primary Abrahamic religious traditions are of course nonsensical, contemplating the implications of this theory of consciousness has led me to wonder what else is radically different from what I had been assuming about the world. If all that we comprehend about the Universe is but a tiny dot on a page, then that raises the question: what else is out there? The question of whether there is a master creator dissolves into smoke at this point, for lack of any meaningful definition of intelligence, or intention. There is no way, really, to articulate the distinction between a universe that springs into existence as the result of a Big Bang, and one that is swept into being by a thought, or an intention, on the part of some magical entity (both of which could be postulated to have occurred at the same time in the past, and to have left exactly the same evidence). But just because the fundamental underpinnings of one tradition of fairy-tales are proven false, does not mean that certain of the claims of those fairy-tales are not in fact 'true'. You can rip apart an antique radio and prove that it contains only glass tubes and bits of metal and plastic; but someone else can then hook a source of voltage to it and show that beautiful music can come from it. Along this same line of thought: perhaps, just because we can demonstrate that all of the machinery of Life can be duplicated by assembling the right molecules into the right physical assemblage of human-being — does not disprove the possibility that there is 'something' that persists beyond the lifetime of that body.

I personally would group the reports of religious or spiritual experiences under the category of ‘paranormal’ phenomena. It is just a syntactical choice. If some ‘one’ has indeed floated over their own body during a medical emergency, or after a near-death experience (the salient point being that in this event they were revived such that they could give testimony concerning the event), then – if true – that would provide evidence that there is indeed something about a living thing that persists after death. Something non-material, in the sense of our present understanding of things material. What would you call that? Since I have never seen any definition of the term “soul” that has sufficient specificity to be of any use, that word is as good as any.  It should be borne in mind, however, that (I would suggest) humans have an unfortunate predilection for making things up. The vast majority of religious experiences, of paranormal phenomena, of miracles and other such – is fiction. Only a vanishingly-small proportion corresponds to an actual glimpse outside of the pinhole.

I think that Truth is more difficult to achieve, and more rare, than most of us had imagined.

Frog hidden amongst the algae

I’ll extend this exposition in the form of additional posts, before compiling into a cohesive paper. It would be good to get constructive feedback as this progresses.

Posted in Artificial-Intelligence, consciousness

Western Digital is readying its HGST 6-Terabyte Helium-Filled drives to ship

Western Digital is ready to start shipping its latest disc drives – the 6TB HGST HDDs. These are hermetically sealed with helium, which, since it has one-seventh the density (mass per unit volume) of air, does not affect the rotating parts as much. These drives are a bump up for their line, which previously maxed out at 4TB. They also consume 23-percent less power and run 4-5 degrees Centigrade cooler.

The shift to helium sounds like a sage move. Personally, I had to ponder a moment on that capacity-point. Six Terabytes.

A little while back (in the eighties), I watched the design-engineers at the firm at which I was working, install a new disc drive into a computer we were going to use to control a bit of hardware. My own computer at this time had only floppy discs, with 360 Kilobytes per floppy. I was impressed by the speed and convenience their new disc drive gave them. No more loading floppies to access files. It seemed so fast – the light on the front would blink a few times and boom! the computer responded.

That was a 5MB disc drive.  Five Megabytes.  You might cogitate upon that for a moment: the new drive that Western Digital is now shipping is not a thousand times greater in capacity. Nope. That would be 5GB.  Let's see..  these rascals are a million times greater!  Well, a bit more actually, since we're talking about 6TB and not 5TB: 6 TB ÷ 5 MB = 1,200,000.

So, let's see. You could pile up over a million of those 5-1/4 inch Seagate disc drives — 1.2 million of them — to equal the capacity of this new drive, which by-the-way is substantially smaller and quieter.

Interesting.

Posted in Uncategorized

Where are the good laptops?

It seems that it is getting hard to find a competent new laptop to purchase. My previous favorite, Lenovo, has eschewed their perfect keyboards and switched to a newer chicklet design. To make matters worse, they’ve abandoned the standard two-row-by-three-column layout of the editing keys (Home, Delete, End, Insert, Page-Up, Page-Down), and the cursor-movement keys are no longer set off in their separate, easily-typed format.

It was already bad enough that the previous 1920-by-1200 pixel resolution displays disappeared. My current laptop, a Thinkpad W500, has that resolution. It is starting to die intermittently, but I am loath to give up that display for the loathsome 1920-by-1080 that all of the displays top out at now. Scanning the websites of the other laptop makers reveals that even among the few models sporting larger displays, there is nothing with a 1920-by-1200 pixel screen. Very annoying.

The saving grace is that newer, higher-resolution screens are starting to appear — Lenovo has announced their new W540 with a screen resolution of 2,880 by 1,620 pixels! This would be awesome. But I am not seeing it in any one model that has all of the basic requirements just yet.

Here, then, is a list of what I would want to see in a new laptop:

1. Decent display resolution!!!  At an absolute minimum — 1920-by-1200. No, not all of us buy laptops to show HD movies, and we feel cheated by these 1920-by-1080 screens. We want that extra vertical band of pixels! Better: a 2560-by-1600 resolution display of at least 15.6 inches. Better yet would be a 17 or 18-inch option at this resolution. To be state-of-the-art (does anyone use that term anymore?) — the newest laptop displays, such as Apple's "Retina" display, are the standard to beat. Why would a manufacturer expect users to shell out $2.2K+ for a new laptop, for anything less?

The proper layout of a keyboard

2. A keyboard whose design respects the 'standard' traditional Thinkpad layout and feel — meaning that the keys have a physical feel that makes it very easy to type, and a layout that matches the traditional PC desktop keyboard (cursor and editing keys are in the right places). This "standard" layout is what my existing computers have. I have bought some excellent aftermarket keyboards with this exact layout.

Closeup of the page/editing keys

There are finely thought-out ergonomics at work here. Notice the bevelled edge of the keyboard well: when you slide your hand up there in a dark room, or as your eyes are focused elsewhere — that edge serves to orient your hand. Your fingers instantly find their way to the Page-Up/Down keys. Through habit, then, your index finger always falls upon the Insert and Delete keys. You don't have to re-focus your eyes to look at the keyboard to find those – it just happens subconsciously. And see that space between the column that contains the Insert & Delete keys, and the column to the left of them (the Pause and F12)? That little space your fingers will find by habit, to further home in upon their location. The excellent Lenovo Thinkpad W500 has this. If a manufacturer foolishly messes with this, and changes the laptop's keyboard — suddenly you're stuck with having to replace everything and retrain yourself! The key-travel must be as on the W500 or W510, which is most definitely not a 'chicklet' keyboard. Professional users do type.

3. Multi-touch display. Not essential all of the time — the mouse does fine. But when this laptop is sitting on the family table and you want to just touch it to have something happen – that's nice to have. Very nice to have. If we're going to pay north of $2K for a new computer today, it seems dumb not to have the option of Touch. Think about this: your customer is contemplating pulling the trigger on a new laptop/workstation; he's mulling it over for a day since it's a chunk of change ($2500); he walks into a retail shop for pencils and strolls by the counter of laptops. One is showing Windows 8.1's start screen. He reaches over and softly touches the Music tile, and it instantly leaps to attention, ready to sound out. Sweet. Do you think he's not going to think about that, when he next goes to sit down and revisit your web-shop page to order?

4. Trackpoint. Okay – I realize most people aren’t used to these and prefer the touchpad. But I and a lot of other Thinkpad users like to have the Trackpoint nib, especially for those moments when the mouse is not at hand.

5. Battery life that exceeds 24 hours. Look, if an iPhone burning 64 bits and quad-cores can last a full workday, why would you expect a user of your W510 to feel happy about the lousy 2 hours it provides?! Do your engineering. Provide the option of a massive spare battery if you must, that connects to the underside. But give us life!

6. The fastest CPU and subsystems — to yield a system as fast and snappy as a desktop. For various reasons, today most laptops feel like you're in a slow-motion movie, even when doing the simplest of tasks. This should not be necessary. Be mindful that professional users couldn't care less about games, so hyped-up graphics subsystems to drive those games aren't what's needed. What we want is for the thing to respond to us. Always. Instantly. No matter what.

That is not a long list. There is nothing revolutionary within it – these are technologies that are available and have already been in production. And this doesn't include all of the minor features that have already become common, such as multiple USB 3.0 ports and a camera-memory-card reader.

jh

Posted in Personal Effectiveness, Uncategorized

Mindth

Thought.

A project that has been a sort of a side-task for years, is starting to bear some fruit. Probably because this last month has been one of the rare instances when I had a little bit of time to actually focus on it.

For lack of a better term – I’m calling it Mindth for now.

Human-like thought had been a rather tough nut to crack. It was my feeling that most of the contemporary research directions were off-track, and I've been exploring in a different direction. There are myriad practical uses in need of something that brings a more intelligent capability to the table.

One area of practical use is that of machine configuration, such as for servers, workstations, and virtual machines (VMs).

The open-source project Puppet is a great example. It targets an area for which there is a great, immediate need — trying to simplify the process of setting up and configuring a server (or workstation, or VM — let's use the term "box"). With Linux lamentably split into multiple diverging camps, each of which (for example, the Ubuntu family) relies upon multiple disparate ways of installing software, it becomes a bit of a nightmare to try to get things done. Add Windows to this mix, and then VMware or other virtualization environments, and you have a combinatorial explosion of routes to explore to encompass all of the possible avenues and methods you need to know to get even a fairly simple box set up and ready to use. But even Puppet takes a bit of researching to learn how to use it, and to set it up.

For small teams, or the lone worker — there just is not enough time to fiddle and tinker and google and nab books and ask colleagues for every little step of the way. Even for a substantial development or IT team, this complexity is a major obstacle. Certain tasks are easy — to you, or to someone who has spent many months working on this one area of expertise. But for those whose responsibilities span wide areas — taking the time to gain that expertise is just not feasible.

And it is quite unnecessary.

As one of the initial, practical applications of Mindth (outside of the proprietary applications which were done in the past) — I am creating a program with the ability to understand plain-English descriptions of what we want it to do. Specifically: to set up and configure a box, to install the requisite software applications, create VMs, optimize it and apply the known personalizations that the user likes — and to do some troubleshooting of problems. The knowledge upon which it draws is expressed in plain, simple English (I'm focusing on just English for the purposes of this discussion, but it is not limited to this one human language). For brevity I'll use NL to refer to plain natural language (again, in this instance – that happens to be English).
To clarify: unlike most programming and script languages, with Mindth the NL is unstructured, other than the requirement of being understandable by a reasonably-knowledgeable human being. If a person can understand it, then Mindth should. As a purely hypothetical example of the kind of instruction I mean: "Set this box up the way I like for C# development, and point it at the team's version-control server."

Mindth has a facility for comprehending your desires (i.e., the goals), and can search what it knows, and what it can access and understand — to try to implement those desires.

KB-Net

One major facility is its crowdsourced-knowledge network. Within a given organization, for example, you normally have knowledge that applies only to that organization, but which everyone needs to know. For example, the email addresses of various officers and IT specialists. The URLs of the team wiki. The servers that serve up web pages or run tests or hold the databases. Database passwords. Database schema. The location of the codebase. The components your code uses. How to access your version-control. Etc. It can be a non-trivial amount of work just to get this information – to track down who knows what, and to keep up with changes. To address this need, the Mindth system includes a set of online services that are available only to this organization, which the instance of Mindth running on anyone's desktop can access. When a server-address changes, that information is put into the Intranet Mindth knowledge-base (a fancy way of saying 'you tell it to Mindth') – and that information is transparently pushed down to every Mindth-instance that needs it.

Actually, this crowdsourced-knowledgebase (let’s call this KB-Net, for network-derived knowledge-base) exists at several levels, each of which has a definite scope. The lowest scope is at the level of the solitary developer himself. That information is private to him or her — it serves as a recorder for his notes (I’m going to adopt the convention of just using the masculine gender here, to refer to either gender: no discrimination is intended). The next higher scopes would apply to the organization, which may have multiple levels of scope (immediate team, department, company). And then the widest scope is Internet-connected and applies world-wide, as for example information that denotes how to install a given application.
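Purely as an illustration of that scoping idea (this is not Mindth's actual API — just a sketch in C#):

// Each knowledge entry carries a scope that bounds how far KB-Net propagates it.
enum KnowledgeScope
{
    Developer,   // private notes, visible to one person only
    Team,
    Department,
    Company,
    Internet     // world-wide, e.g. how to install a given application
}

class KnowledgeEntry
{
    public string Key;            // e.g. "build-server address"
    public string Value;
    public KnowledgeScope Scope;  // pushed only to Mindth instances within this scope
}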

Implementation

Currently, Mindth is being implemented on Ubuntu and Windows, using the programming languages C#, F#, Python, and C++ where necessary. The main user-facing application is a desktop GUI, which on Windows is composed using WPF. I am using Test-Driven-Development (TDD) for the lowest-level routines, where that makes sense. For exploring major ways of accomplishing things, I keep the TDD on hold, since following it to excess would seriously slow down progress. Individual functions – those which are well-defined and must work without fail – are where TDD makes sense: I simply don't have time to do extensive debugging with each change, and this is a system that must achieve consistent correctness. A good balance of experimentation and disciplined process is essential. For the code that implements what we would call "thinking", or feeling, or perceiving — I need the ability to tinker and experiment and to quickly evolve in new directions.
There are functions within Mindth that are taking a rather long time to run, despite my having re-implemented portions in C++ and exploiting every CPU-thread available. I’m exploring the possibility of taking advantage of the computing power of a high-end GPU card to make this faster.
With regard to programming languages, I personally am feeling a very good relationship with C# — it is turning out to be the best possible overall-systems programming language. F# presents an awesome opportunity to succinctly express some of the pattern-matching and logical-rule-resolution paradigms, and it mates very well with C#. C++ is the original, barebones pedal-to-the-metal (as in, very efficient) programming language; it falls a bit short when you need to express complex data-structures, but for speed and interfacing with electronics it is unsurpassed. Python.. what can you say about Python? At first it feels like a rather odd fellow, but it IS a great tool for tinkering interactively, for gluing things together, and for exploring some of the excellent open-source assets that are available. The toolbox needs all four.

The architecture itself is a topic for later discussion.

We’ll see where this goes. I am having some fun.

Posted in Software Design

Test-Driven Development (TDD): To free, or to constrain?

Praying Mantis, looking to put the bite onto your lovely butterfly

How many times have you reached for a nice shiny new tool and put it to good use, enjoyed its benefits .. and then after a spell realized that your shiny plaything has walked over to take its place along with all the other myriad factors that weigh you down with complexity and detail?

A: You factor out a bit of common functionality from your project code, and compose a cute little function that does just that. The API into your function is simple, your inputs are well-defined, and the output is deterministically decided by your inputs. You know exactly when it is correct; so you compose a suite of unit-tests to prove its correctness, and add that to your continuous build/test process. Very nice. Have some coffee.
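That happy case might look something like this (a minimal sketch, using NUnit):

using NUnit.Framework;

// A pure function: the output is fully determined by the inputs.
public static class TextUtil
{
    public static string Truncate(string s, int maxLength)
    {
        if (s == null) return null;
        return s.Length <= maxLength ? s : s.Substring(0, maxLength);
    }
}

// A small suite that pins down the function's contract.
[TestFixture]
public class TextUtilTests
{
    [Test]
    public void Truncate_ShortensLongStrings()
    {
        Assert.AreEqual("abc", TextUtil.Truncate("abcdef", 3));
    }

    [Test]
    public void Truncate_LeavesShortStringsAlone()
    {
        Assert.AreEqual("ab", TextUtil.Truncate("ab", 3));
    }
}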

B: Your management has mandated that you shall always be as one with the Test-Driven-Development (TDD) buzzword. Everything you write is unfinished until you also write the unit-tests, lots of them, and everything passes. No sweat – TDD has proven benefits, you say. Predictability. Reliance upon correct software. Every sprint (a term from the SCRUM process) has milestones, and tests to prove them. But those tests are no small amount of added work. And with each incremental bit of functionality, many existing tests now fail. So you have to go back and fix those. More late nights of coding. The fridge empties of creamer. Oh, by the way — now some of your teammates' code is failing. Oops. Well – they gotta get on board with the new functionality you just added – so they're working late too. The show-and-tell is tomorrow. Maybe it will work – who knows? Everyone has been so buried in trying to meet the milestone that no-one has had a spare moment to sit back and really see whether this project is going in a good direction.

What happened? How did you ever get from A to B? And is your job (and your company) still going to be around tomorrow — will you survive?

Let's talk about this, and I'll continue this story next week (check in Monday morning, 4/22).

Posted in Software Design, The Development Process

A Thought on the TDD strategy

Test-Driven-Development (TDD) has a good contribution to make to a project’s management strategy. It is a means by which one can know that a given kit of software (your end-product) does perform according to a specific specification (the suite of unit-tests), and can help to focus a developer’s mental energies toward meeting that specific goal. However, it can also bog your team down in detail if it is not used wisely.

TDD can feel wet and scratchy, but it can be nice if you treat it with understanding.

To illustrate this last point – a project I was on recently used TDD throughout, religiously. We had several disparate groups who met daily for our "scrum" update, and from these meetings it seemed that for any code-change, no matter how small – even just a minor change on one line – one could expect that many days would be needed to get all of the unit-tests to pass.

The problem was that we did not have a real direction to our TDD. Each developer was told to "write tests" for anything we added or changed. Thus, a developer would wind up looking at his code for any detail that could be "verified": assert that this flag was true, that flag was false, how many results were returned, what strings could be present, etc. Upon adding T-SQL code to create a database view, for example, unit-tests were added to verify the number of lines of T-SQL code, that it contained such-and-such keyword, etc. Upon adding another bit of SQL, all of those previous unit-tests now fail: every detail has to be accounted for and updated.

A huge amount of time was being wasted. Still is.

It is vital to ask oneself: “What are these tests supposed to achieve?” Your work is to implement functionality within your end product. Do you really care about every detail of how that was done? Do you really need to test for every artifact of the implementation? What if the developer finds a superior way to implement, and achieves the same functionality? Do you really want him to have to re-write a huge battery of unit-tests?

And, if your developer is going through the tests, method-by-method, editing them to get them to pass, are they really serving their true purpose — which is to double-check that the functionality is actually working?

If the one over-riding goal of your software development work, is to produce a product that works (and I hope that it is) – then you really cannot afford to get bogged down by detail. You must move forward, solidly, or perish. No matter how large your team. Even IBM and Microsoft got ground down by excessive code and detail-work. Progress grinds to a standstill, and younger, more agile competitors come to eat your lunch. Software has to evolve, to improve, to become always more solid — and to do this you have to make real, steady forward progress. Not just write a zillion unit-tests for the sake of saying you “use TDD”.

Suggestion: Forge your goals related to TDD, and discuss this with your team leaders. Know how much time is being consumed by writing and running tests (which means – individually tracking this time). And talk about and understand (together) how best to use TDD to meet your goals. Use it where it makes sense, let it go where it does not!

The purpose of software is to accomplish a specified functionality. Thus your tests should serve the purpose of verifying, to the maximum extent possible, that that functionality is indeed accomplished. But they should do this in the simplest and most concise way, and avoid duplication. Only test for the correct end-result, not the steps to get there. Factor out as much as possible of the infrastructure-code and share it amongst the team. If your API changes, then yes – you can expect a lot of rewriting of tests. But if it is a simple change of code to optimize the implementation, and it necessitates a massive number of test changes — that is a red-flag that you may be getting bogged down in test-itis!
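To make that concrete, here is a sketch of the distinction, in C# with NUnit (SqlScripts and TestDatabase are hypothetical helpers, not from any real project):

using NUnit.Framework;

[TestFixture]
public class OrdersViewTests
{
    // Brittle: pins implementation artifacts. It breaks the moment anyone
    // reformats or extends the SQL, even though the view still works.
    [Test]
    public void Script_HasExpectedShape()
    {
        string sql = SqlScripts.CreateOrdersView;      // hypothetical helper
        Assert.AreEqual(12, sql.Split('\n').Length);   // counting lines of T-SQL!
        Assert.IsTrue(sql.Contains("CREATE VIEW"));
    }

    // Better: verifies only the end-result — the view exists and is queryable —
    // and so it survives any rewrite of the implementation.
    [Test]
    public void OrdersView_IsQueryable()
    {
        using (var db = TestDatabase.Open())           // hypothetical helper
        {
            Assert.DoesNotThrow(() => db.Query("SELECT TOP 1 * FROM dbo.OrdersView"));
        }
    }
}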

On a different project that I considered to be pretty successful, we were writing low-level code that had to work on myriad platforms — versions of Windows or Unix, 32-versus-64 bit, environments with various components already installed (or not), etc. For this we used virtual machines (VMs) — VMware in this case. One VM would represent a specific platform: one for Windows XP 32-bit, a bare install; another for Windows 8 beta 64-bit that already had .NET 4.5 installed; etc. One lovely thing about these VMs is that you can deploy them and do a lot of stuff using Windows Powershell, which in turn is easily callable from C# or a product like RobotMind. Products which in turn can be invoked via a right-click on a project within Visual Studio. Thus, instead of spending days setting up for, and running, and checking the results of each of this plethora of tests, we could just define the whole set up front, right-click on our C# project when done coding, and select "Test this!" — and it would send the compiled code out to the proper test-VM (or set of VMs) on the dedicated test boxes, and deliver back the results ("Go", or "No-go"). To keep things singing along, I dedicated a box just to running these VMs — one which had a PCIe SSD and oodles of RAM. I could open Remote Desktop (RDC) on it and see at a glance what was running, and what the results were. No manual file-copying, no setting of configuration values.
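The C#-to-PowerShell hand-off itself is straightforward. A minimal sketch (the script path and parameter are placeholders for whatever actually drives your VMs):

using System;
using System.Management.Automation;  // reference System.Management.Automation.dll

class TestLauncher
{
    static void Main()
    {
        using (PowerShell ps = PowerShell.Create())
        {
            // Invoke a (hypothetical) driver script with a parameter.
            ps.AddCommand(@"C:\Tests\Start-TestRun.ps1")
              .AddParameter("Platform", "WinXP-32");

            foreach (PSObject result in ps.Invoke())
                Console.WriteLine(result);   // e.g. "Go" or "No-go"
        }
    }
}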

Along with that, I strongly suggest that you look into continuous integration. And to integrate that into your build process, I suggest you carefully consider it in the context of your chosen version-control tool. I have found that you don’t necessarily want to just automatically build everything that gets checked in, when it is checked in. If you do, then everyone is afraid to tinker, or to check in partial results at the end of the day.

Instead, if your version-control tool gives you the ability to attach a “Label” or “Tag” to a given set of check-ins, then you can use that to signal to your continuous-build tool what to check out and build and run tests on. This way, you can merrily check in your work at the end of the day, even if it does not pass tests. If your workstation goes down overnight, or something else happens — your work is safely stored within the code repository. And it does not “break the build” because you did not label it as “Known Good” (or whatever nomenclature you decide to use). Your build server, when it runs nightly or continuously, can simply check out the current branch that is labelled “Known Good” and build it, and run the suite of tests. CruiseControl is probably the most well-known product in this space; I have used it in the past, and it worked well for us. FinalBuilder is another, very powerful product that merits a careful look. Most recently I have grabbed and deployed TeamCity (from JetBrains) and was absolutely delighted at how fast it is to fire up.

In summary, pay heed to your process. Watch out for that trap that ensnares many teams, of getting bogged down trying to meet the needs of the tools, of the processes (like TDD or bug-tracking), and of paperwork. When your developers start to sense that your process is weighing down their productivity (as measured by the actual, real-world functionality that is evolving – the kind that your customers will actually see) then it is time to seriously re-examine your whole process.

jh

Posted in Software Design, The Development Process