.NET Core 2.0 Is Now Released

The .NET platform continues to move forward and evolve, and its usefulness now extends to more target platforms. As of August 14th, 2017, .NET Core 2.0 is available as a final release. You can now develop against it as your primary target version of .NET and expect to run your software on Windows, macOS, and Linux (see here for a list of supported OS versions).

You can download the bits and view installation instructions here. Note that Docker images are available too.

You will want to update your version of Microsoft Visual Studio to Version 15.3 also. This update brings you a significant set of fixes and incremental improvements.

See the full announcement here.

If you are developing websites using ASP.NET-MVC, you can access the GitHub repo for the new Core version here.

James W. Hurst

Posted in Uncategorized

Security: Are you securing your WiFi Router?

Ok, this is an alarming finding:

Most people never change their router’s password

Continue reading

Posted in Security, Uncategorized

AR and VR Have Some Surprising Potential Uses For New Development

When I first started hearing about Virtual Reality (VR), my reaction was “oh great, more toys for the gaming boys.” That was a foolish dismissal of something that can bring great utility to some unexpected areas.

I love to machine metal (I’m not a professional at it), and there is a visceral joy in making things out of steel and aluminum that is hard to understand for anyone who has never smelled burning cutting-oil or vacuumed sharp chips of steel out of a shirt pocket. Machine-shop skills also help to round out an engineer’s sense of maker capability. But it can be laborious: just zeroing out your cutting-bit on a Bridgeport to establish your X-Y reference, so that you can make your cuts to .001″ accuracy, can consume many minutes of detail-work.

Now imagine this: you put on some awkward-feeling goggles, step up to your machine with the blank piece of steel already snugly clamped in, and say: “Touch the bit to the right and front edges to establish a zero-reference.” The software controlling your milling-machine uses AI-vision (a term I use for Artificial-Intelligence-based robotic vision) and AI-speech, along with a general knowledge of machining and of *this* specific machine, to know to power up, touch your workpiece to establish the point of reference (using a touch-sensor or else just high-resolution AI-vision), and *ask* you if it finds any ambiguity in your request.

But it is possible (indeed likely) that it will completely misunderstand what you want. It might take its reading off of the wrong edge. Or what if your workpiece has multiple stepped edges, or some other non-simple shape? This is not so much a miscommunication as it is a *misunderstanding*. And we can attend to it like this:

Imagine now that over the right edge of your workpiece an image appears of a thing shaped like a pointer, which floats over to the actual edge and seems to touch it, and magical marker-lines appear that just barely touch the corner (leaving zero doubt as to what they are indicating), as a voice asks: “Is this what you want to use as your X-zero reference?”

You: “No. The edge just below that.”

The pointer moves again.

You: “Yes, right there.”

Now the sensor is physically moved there to take its reading, and then we repeat this process for the Y-reference.
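To make that interaction concrete, here is a rough sketch in C# of the confirm-or-correct loop described above. Every type and method name here is hypothetical, invented purely for illustration; it does not reflect any real machining or AR API.

using System;

// Hypothetical stand-ins for the AI-vision, AI-speech, and AR-overlay capabilities
// described above. None of these names refer to a real product or library.
public enum Axis { X, Y, Z }
public sealed class Edge { /* geometry details omitted */ }

public interface IMachineAssistant
{
    Edge ProposeReferenceEdge(Axis axis);                     // AI-vision picks a candidate edge
    void HighlightInOverlay(Edge edge);                       // AR: draw marker-lines on that edge
    string AskOperator(string question);                      // AI-speech: ask, then listen
    Edge ReinterpretCorrection(string reply, Edge previous);  // e.g. "No. The edge just below that."
    void TouchOffAt(Edge edge);                               // physically take the zero reading
}

public static class ZeroReference
{
    public static void Establish(IMachineAssistant assistant, Axis axis)
    {
        Edge candidate = assistant.ProposeReferenceEdge(axis);
        while (true)
        {
            assistant.HighlightInOverlay(candidate);
            string reply = assistant.AskOperator(
                $"Is this what you want to use as your {axis}-zero reference?");
            if (reply.StartsWith("Yes", StringComparison.OrdinalIgnoreCase))
                break;                                        // confirmed; stop asking
            candidate = assistant.ReinterpretCorrection(reply, candidate);
        }
        assistant.TouchOffAt(candidate);                      // only now does the machine move
    }
}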

[Image: metal milling in a dental lab]

Note how simply pointing to things instantly clarifies the conversation and places it upon a firmer foundation. This is Augmented Reality (AR), so named because you still see the actual world from within your headset; it simply adds things to it.

And the effect, and utility of it, can be fantastic.

Note that the application I describe here entails AI, which is my specific area of focus, both academically and in practice. It is my personal passion (that’s the buzzword today, right? “passion”. Really I prefer just to say “this stuff is fun”). This is most useful, and most fun, when your AI comprehends what’s going on and applies the questions and the visual augmentations only where and when needed.

Now imagine that you’re cutting your workpiece. The AI system accesses the blueprint you gave it (downloading it from your cloud-based data facility) when you say “Get the blueprint for the widget that I drew recently and cut this metal to that shape,” and begins cutting, all the while showing a virtual DRO (Digital Read-Out) with imaginary LEDs flashing the X, Y, Z positions in realtime.

Or it might say: “Your workpiece is too narrow along the Y axis for this item.”

Or it might make a hand or a different pointer appear, pointing to the speed-changer levers, and say: “You will need to set the speed to 1200 rpm,” or remind you to “Move your tools clear of the cutting area and ask that person beside you to please put on his safety glasses.”

If there is an object or piece of the machine that would block its motion, an AR red flag appears and points to it: “This will obstruct my movement.”

Helping to show you what to do next on this machine (a milling-machine can be quite a complex kit of machinery), to set your references, guide the cutting, change bits, and so on: these are all radical improvements to the process.

There is one essential point here:
Your incorporation of Augmented-Reality, in conjunction with careful application of AI and good UX-design, can radically transform a complex task into something simpler, faster, and safer. The improvement in productivity can be substantial.

The utility of this could be akin to having a human looking over your shoulder, who has a perfect pointer, who never gets distracted and never fails to identify a safety-situation.

I chose this one specific application for AR because, in my experience, it stands out as a great need, but it is not hard to imagine many other uses. Here I have touched upon some basic augmentations of the user experience: pointing to things, confirming intent, giving a progress report in realtime, safety, instruction (again, using pointers of various types), and grounding the AI-speech conversation with actual illustration. If you want to get started with AR/VR you have several vendors to draw upon: Facebook provides the Oculus headset and SDK, and Alphabet Inc., Samsung Electronics Co., and Sony Corp. also have products in this field or are preparing them. I’ll be writing some how-to articles on this topic shortly.

I believe there is substantial fruit here, to be harvested by the visionaries who can see the possibilities and put them into action.

#augmentedreality #virtualreality


Posted in Artificial-Intelligence, Augmented-Reality

Simplifying your Software Design: Logging

The LogNut open-source project on GitHub is a facility that I created to do one thing: to simplify this one aspect of software-design. Logging.


In the best programming environments you can type your code, make it RUN, and watch what it does. But not always. Sometimes you add logging statements to your code just to have some way to see what is happening. What you don’t want is for that logging facility itself to add to the complexity of your endeavor.

I prefer my tools to be spartan and simple. I want them to just work. Always, and reliably. Without a lot of prep, reading, and thought — “just flip’n do it and don’t get in my way!”

This discussion is about the C# version of LogNut, on the Microsoft Windows platform, but the general design aspects apply to its other versions (F#, ASP.NET-MVC, Xamarin.Forms, Java, Swift, C++).

With a .NET application, the very minimum that you have to do to use a library (called an ‘assembly’ in the .NET world) is to reference it and then call its API. Most logging frameworks have multiple other things that you must reference, configuration files to prepare, documentation to read in order to figure out where those config files must be located and where your output will go, and so on.

I’ve eliminated the need for most of that. You can still do it (control the output-location, make use of config-files, perform extensive configuration of every detail) but you don’t *have* to. For example, this is a perfectly acceptable bit of code to output the text “Hey – this code is working” into a log file (a simple text file):

using Hurst.LogNut;
...
LogManager.LogInfo("Hey - this code is working");

And that is it. Where does it go? Since you did not specify, it goes into the one safe place that is always there for you: your ‘My Documents’ folder, into a folder called “Logs”. Since you also did not specify the name of the file, it is named after the program you are running, with “_Log.txt” appended, so that you can open it by simply double-clicking on that file just like any other text file.
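If you ever need to locate that default file programmatically, the convention is easy to reproduce. The helper below is not part of LogNut itself; it is just an illustrative sketch that builds the expected path from the rules described above.

using System;
using System.Diagnostics;
using System.IO;

// Illustrative only: computes <My Documents>\Logs\<program-name>_Log.txt,
// the default output location described in this post.
static class DefaultLogPath
{
    public static string ForCurrentProgram()
    {
        string myDocuments = Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments);
        string programName = Process.GetCurrentProcess().ProcessName;
        return Path.Combine(myDocuments, "Logs", programName + "_Log.txt");
    }
}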

This is all clearly spelled out in the library’s introductory documentation, its sample code, and within the API itself.

What if you specify some other place to write the log to, but that place is not there (such as a flash drive that has been removed)? LogNut ‘fails over’ and writes to that same fall-back location (‘My Documents\Logs’), but with “_REDIRECTED” inserted into the filename.

LogNut is a full-power logging library; you can still do all the full-featured things, such as:

var myLogger = LogManager.GetLogger("Message-Q");
myLogger.LogTrace("This is normal");
myLogger.LogDebug("Why is this executing?");
myLogger.LogError("Uh oh!");

and your exception-handling blocks can be accomplished with one simple line of code…

}
catch (Exception x)
{
    myLogger.LogException(x);
}

If you’re starting a new software project in C# or any of the other supported platforms/languages and are contemplating what to select for your logging library, check it out:

https://github.com/JamesWHurst/LogNut

Posted in C#, Software Design, The Development Process

A thought about the evolution of our UX designs

In the beginning we had widgets: buttons, textboxes, checkboxes, and static (non-interactive) text on our user interfaces (which I refer to here as “UX”).

That was in the days of purely desktop applications, running on Apple Macs, Windows, and Linux/Unix. As our display screens gained higher resolution, our widgets became more detailed and visually obvious. Yes, sometimes it was worthless eye-candy that only cluttered the screen. But often it was graphic design that served to help us: it made the widgets stand out, so as to help us identify what they were and how to use them. Every available action provided a button of some sort. And buttons were a metaphor for something physical, a BUTTON, and these could have a visual rendering that reminded us of that: 3D shapes, lighting and shading, textures, and a subtle animation that happened when you clicked on it (or touched it with your finger, where touch-sensitive surfaces were used). The original Apple iPhone and iPod seemed to elevate this aspect to a new art form. I loved that.

Then something happened. Buttons began to be replaced with links: bits of text that you could click on, not always rendered in a way that made that obvious. Touch-screens and mobile devices began to change the landscape, and Microsoft came out with its “Metro” look on Windows Phone and Windows 8. I liked the Windows Phone, but that Metro-ness caused me major angst on Windows. On the tiny smartphone it made sense, and the layout was nice and efficient. On the desktop screen it made no sense. Why have a 5K display surface just to show a few solid blocks of color that force themselves over the entire screen? Then Apple followed along, ironically wasting the fine work they’d done on their UX designs. 3D buttons devolved into simple rectangles of color, or just links. Sometimes those links are so tiny and indistinguishable from the rest of the (often very tiny) screen that they become hard to use.

Well, I’m unhappy. Snarl. Hiss. Boo.

I’ve a smartphone with a very high-resolution display surface, an impressive amount of computing power, and a very hard-to-use software design. WTF? !!!

Someone has forgotten (or never learned!) the basic principles of UX design: make it simple, intuitive, and obvious.

Every major action should have a button (or an icon functioning as a button). Buttons should be obvious, and they should cover all of the common workflow paths that the user may want to accomplish, from wherever she may want to do them. If you are looking at text messages, where is the “delete this text-message” button? Is it within that “all other actions” icon that is stuck up in the corner? Nope! Argh!!

This devolution in our UX design has major consequences. I have missed crucial appointments because, when I entered one into my Calendar, some field that escaped my notice was somehow not set correctly. Sometimes someone calls me, I look at my phone, and I cannot tell how to answer the call. And I still cannot reliably get into my own voicemail.

Yeah, after many decades as a designer, I guess in some respects I am old-school. I respond to that accusation with the admonition that some basic rules of engineering do not change, and should not be assumed to.

Posted in Software Design, The Development Process

Multi-threaded Programming Techniques, versus Message-Queuing

This is just a quick draft that I’ll expand later tonight…

To keep this simple, I’ll use “thread” here in a sense that is synonymous with “process” or “task”, in that it’s simply a concurrent path of execution, using whatever ready mechanism is provided by your chosen programming language.

A lot of developers, even those with substantial experience, often turn to “multi-threaded” methods in their designs to handle the scaling-out of their program’s workloads. I submit that this ought to be approached in a more circumspect way.

Allocating your program’s work to more threads, can be counter-productive. I’ll explain with a simplistic example..

Let’s say your program is to serve users who log on and make requests. You have already dedicated threads to such major areas as the UI and writing a log. Now, when each user comes on and needs the program to serve his requests, you spawn a new thread that is dedicated to that user, and retire it afterward. Let’s also assume that you have already optimized this somewhat by using a thread pool, to avoid the overhead of creating and destroying thread objects for each user logon.

The problem here is that no matter which CPU your program is running on, you have only a few cores, and thus only a few threads that can truly run concurrently. Your server might be running an Intel i7 that has 4 cores and 8 hardware threads available (at most). What will happen if 300 users suddenly want to log on and make requests? A few of them may get served fairly quickly, and then the rest are frozen, their processing threads blocked while awaiting the others to complete. If these block awaiting access to low-level resources such as the disk or database, you may have race conditions, or just a massive logjam that fails completely.

There are some tasks that deserve to have a thread dedicated to them, which simplifies your design, but those should be few in number.

For those tasks that are truly variable, and can scale up dynamically from zero to any huge number — you need a different approach.

Consider the message-queuing pattern. Here, you dedicate just one thread to serving all of those users, and yet the system may run far more efficiently and much faster. Consider…

If the system places the user-requests onto a queue, in the form of discrete messages, and your service thread then runs asynchronously taking those messages *from* that queue and serving them one-by-one, you can very effectively decouple the system from this user-servicing job.

Within the context of this message-servicing task, your program is running synchronously, serving exactly one message at a time. By running each user-request synchronously, you can now protect your system against race-conditions far more easily. It serves the message, writes to disk or database, etc. and completes it — and proceeds on to serve the next message. No thread-spinup or spindown overhead, no blocking, and that one thread should be now free to run full-bore.
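To make that concrete, here is a minimal in-process sketch in C# using BlockingCollection<T>. The class name and the string request type are illustrative only; the point is that any number of producers can enqueue requests while exactly one consumer thread services them in order.

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public sealed class RequestProcessor
{
    private readonly BlockingCollection<string> _queue = new BlockingCollection<string>();

    public RequestProcessor()
    {
        // One dedicated consumer; messages are handled synchronously, one at a time.
        Task.Run(() =>
        {
            foreach (string request in _queue.GetConsumingEnumerable())
            {
                HandleRequest(request);   // disk/database work happens here, in order
            }
        });
    }

    // May be called from any thread (e.g. per-connection handlers).
    public void Enqueue(string request) => _queue.Add(request);

    // Call when shutting down; the consumer loop ends once the queue drains.
    public void Shutdown() => _queue.CompleteAdding();

    private static void HandleRequest(string request)
    {
        Console.WriteLine("Serving: " + request);
    }
}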

The key is that the number of threads now does not balloon up in response to a load. It might still fail to serve a massive number of users, but at least it won’t freeze-up and crash your system (ideally). It will just run as fast as it can, falling behind perhaps — but then catching up as soon as it efficiently can.

There are several excellent message-queuing components available on the open-source marketplace, and most of those are intended for communication across networks. But you can implement this strategy even within a given program, with all components on the same box — with a fairly simple design. In fact, I believe that often this is a far simpler way to go about it than most other approaches.

Please let me know your thoughts. Thank you.

James W. Hurst

Posted in Uncategorized

Notes re ASP.NET MVC 5

Checking out ASP.NET-MVC 5 using Visual Studio 2013 Ultimate on Windows 8 x64, I’m seeing the following issues as of 2014-1-21:

1. Right off the bat, a lot of JavaScript runtime errors, even with just the initial skeleton web application that Visual Studio produces. See the screenshot below; this is from simply launching the blank web application using Internet Explorer.

[Screenshot: JavaScript runtime errors from the skeleton application]
Google does not yield answers for this that I can see – although I do see a couple of complaints by other users. Evidently the consensus is that this can be ignored for now. I don’t find that very satisfying.

2. The Areas feature does not work consistently, or at least not as advertised within their tutorials. (Note: the capitalized “Area” indicates the ASP.NET-MVC feature of that name, as opposed to the generic word “area”.) I can reach into an Area by typing the full URL into a browser window, including a controller name such as “Home”. For example, I added an Area named “LostAndFound.” I can get to that by typing the URL with the suffix “/Home/LostAndFound/”, but not “/LostAndFound/”. ActionLinks from the main area into a particular Area do not work as claimed; you have to include the full namespaces argument to distinguish them (see the route-registration sketch after this list). You cannot have a same-named controller class, contrary to the documentation’s claim. Google does not yield answers for this that I can see. Suggestion: avoid using Areas altogether. If you know the solution for this, please leave a comment.

3. Compared to the skeleton web application that Visual Studio 2012 produced for MVC 4, the Images folder has been removed, and the CSS styles for the unordered list with the round graphic images for the list-item numbers are missing. You can fix this by simply copying these over from another web application that you created for MVC 4: both the Images directory and its contents, and the CSS styles from Content/Site.css. I am not seeing any comments on the web as to why this was removed. I rather liked it.

4. Ditto for the aside-element styling that had been in Content/Site.css; I added that back in. Why was it removed?
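Regarding issue 2: for reference, here is a sketch of where that namespaces argument goes, in the root route registration (App_Start/RouteConfig.cs). The “MyWebApp” root namespace is a placeholder for your application’s own namespace.

using System.Web.Mvc;
using System.Web.Routing;

public class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        // The namespaces argument restricts this route to the root application's
        // controllers, so an Area may define a controller with the same name.
        routes.MapRoute(
            name: "Default",
            url: "{controller}/{action}/{id}",
            defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional },
            namespaces: new[] { "MyWebApp.Controllers" }   // placeholder root namespace
        );
    }
}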

Posted in Uncategorized

Visual Studio 2013 is released, with F# 3.1, the Visual F# tools, and improvements to ASP.NET

Visual Studio 2013

Microsoft has released Visual Studio 2013. This brings, amongst other improvements, F# 3.1, improvements to ASP.NET (including ASP.NET-MVC and Web API), and updates to Entity Framework.

Here is the Microsoft download page for Visual Studio.

See here for a writeup on ScottGu’s Blog.

F# 3.1

This new version of the F# language introduces incremental enhancements to the language and tools platform that further its usefulness to the developer. These include named union type fields, extensions to array slicing, improved type inference for LINQ-style methods, better support for C# extension members, and enhanced support for constants in attributes and literal expressions.

See this post on the Visual Studio F# Team Blog.

If you have not yet explored the F# language I would encourage you to check it out. It is a great tool to have in your kit.

Posted in F#, Visual Studio

The Minimal-Mental-Model Hypothesis: Identity

What if another person were instantly brought into existence, next to you, who was identical to you in every physical way right down to the molecule? Would that be you, or just a replica of you? Would that person matter just as much as you do? Would you care about that person’s welfare exactly as much as your own?

What if there were a device that could “beam” you to another location, not by moving your atoms over to that location but by assembling in place an exact replica, right down to the molecule, and the instant that new entity comes into existence you yourself are disintegrated into oblivion? Sort of like the sci-fi concept from the Star Trek movie series. For the sake of argument, say the new entity was created so perfectly that even the electrochemical activity within your brain was duplicated at that instant in time, such that the memories and thoughts fired up in exactly the same state that your own brain was in. You could regard this event in two very different ways: you could say that your body was just moved from one position to the other, or you could say that you died, and someone else took your place and carried on.

Another aspect of our consciousness that is evidently at least partly a fiction is that of identity. This is important for reasons that will become evident as I develop this idea. It took a little effort to arrive at a working implementation of this aspect of consciousness within the MindTh project.


Imagine that you have before you (‘you’ being our hypothetical animal standing in the jungle) two things. These things matter to you for whatever reason: perhaps they’re something you tend to bump into, so their location is important in your mind. Or perhaps they’re predators, in which case their location is very important to you. To track them, your mind applies a label to each one. Let’s say the one on the left gets tagged with a post-it note with “A” written on it; the other, with “B”. Now your brain finds it easier to think about them. One agent within your mind can call out “Watch out! A is moving nearer!” Or: “B is asleep. Let’s go nab an egg.”

Database developers are well familiar with this concept. Many databases use tables of information, each table containing many rows of ‘records’: for example, books for a bookstore. A central issue is how you distinguish one record from another. Suppose, for example, you have multiple books (that is, multiple records) that have the same title and the same author. The designer of the database will often give each record an additional attribute: an identifier. That can be a simple number, a number which is different for every record. This number has absolutely nothing to do with the book; it exists purely for identification purposes.
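In C# terms, the idea looks something like this. The record type is hypothetical, purely for illustration:

// The Id has nothing to do with the book itself; it exists solely so that
// two otherwise-identical records can be told apart.
public class BookRecord
{
    public int Id { get; set; }        // surrogate key: unique per record
    public string Title { get; set; }  // may repeat across records
    public string Author { get; set; } // may repeat across records
}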

That’s the concept. A more interesting example within the world of database-design is that of a table of people. You can store all sorts of traits about them, but how do you know for sure that you don’t have two records referring to the same entity? Names can be the same, dates-of-birth, even the address. In the U.S. it is not necessarily a straightforward solution to require a Social-Security Number for every person. Some don’t have one, or perhaps a given record might be for a person who is not within the U.S.

This same issue translates into our interaction with the world around us, and our mind uses an analogous mechanism to deal with it. We create an additional attribute to apply to various objects in order to distinguish them from one another.

Our mind often does this so automatically, so insistently, that we fail to realize it. Given two things that are in front of us, suppose now that both things are exactly identical. For every relevant property (size, color, texture, everything) they are the same. They might even be identical right down to the atom. But if at some point your mind has occasion to hear one called “A” and the other called “B”, then from that point onward that label (A or B) is pictured in your mind whenever you think of the thing that A or B is associated with. Our minds start to confuse the label “A” with the thing that is being called “A”. Absent any sort of label, your mind will hesitate for a moment. If it must, it’ll simply create a label that says, essentially, “the one on the left” and “the one on the right”. If they then switch positions (and you see it), your brain again experiences a moment of confusion as it attempts to realign. But if they’re not quite identical, your brain will quickly zero in on some feature, any feature, that appears useful for distinguishing the two.

Often this is quite useful. Hence: this trait prevails in our mind. But it can lead to some misperceptions if we stick to this conceptualization without having an open mind on certain occasions.

Identity, the concept that our mind mechanically attaches to things we perceive, is largely a fiction. It is a construct of the mechanisms at work within our brain, implemented for purely pragmatic reasons. It is not ‘of‘ the external world: it is something we layer on top of it.

To take one example where this mechanism illustrates a shortcoming of our limited mental model: in the most common present-day religions, it is often a tenet that there is one “god”, and that humans are commanded to worship only that specific god. However (following a train of reasoning that presumes this to be authentic), that god is not identified, other than through the fact that there is only one. No need to apply a name, or some artificial “identifier” bit of information. Sometimes there are no actual descriptive traits to use to identify this god: no height or weight, no address, no fingerprint or DNA sequence. However, when you introduce the possibility of other gods, it becomes important to assign some kind of identifier. Absent that, it becomes nonsensical to talk about “the one god” if you have no way to identify him or her. Without, for example, a photo of how he (let’s just use the masculine pronoun for the sake of brevity) physically appears, or his Social Security number, or a name that is truly unique (which would have to be something other than “God”, one might presume), it can be impossible to know which god you’re referring to, and thus impossible to know whether you are worshipping the correct one. The salient point is that the mental model we use to think about this breaks down.

A friend. Watching you.

In modeling the mind, you have to step back a bit and not try to over-philosophize this. The visual system identifies possible “things”, and another mechanism quickly (and subconsciously) puts labels on them. The brain operates as though every thing already has an inherent identity. It depends upon it: the gears hesitate for a moment otherwise. This is an important clue.

This is related to why the brain is quick to categorize groups, for example races of people. It is a subconscious instinct, although this does not mean that we humans cannot manage it consciously. When confronted with a new group of people who seem important to your world in some concrete way, you feel a sense of discomfort, as if you’re failing to quite comprehend the world around you, until you give this new group a label. Imagine a mob of strange people comes into a room, and an observer nervously asks, “Who are they?” You respond: “Oh, those. They are just blue-bellies.” Now that observer is calmed. He perceives that he understands the intruders; his comprehension has managed the event, and the source of discomfort is removed. In modeling this behavior in the computer program I am designing, it is a very useful mechanism.


There is another, quite bizarre path that I am presently exploring. It is possible (not “possible” as in it definitely, physically could be the case, but rather in the sense of being open-minded to the idea that it is plausible) that certain of the things we are aware of as distinctly-labeled entities are actually not distinct. That is to say: our mind regards them as separate things, but actually they’re just different views of the same thing. This may apply to regions of space, as well as to physical objects. Perhaps they ‘wrap around’, whereby you look in one direction and perceive something, and then you look in another direction and see another thing that looks the same as the first, and your mind gives it another label because it seems to be in a different place. But this time, that is a mistake. This might be considered analogous to standing within the hall of mirrors that amusement parks used to have: you visually see multiple instances of yourself, or of someone standing near you. Your mind tricks you.

Remember, as Mother Nature brewed up your brain, she only designed it to deal with your immediate environs. Whatever was good enough for us to get by, to survive and procreate, was enough. That applies only to a very tiny scale of space and time, and of speed, and with further substantial limitations. As we endeavor to broaden our understanding of the Universe, we need to contemplate the possibility that our mental model is a cage from which we will need to free ourselves.

A universe of many things

Posted in Artificial-Intelligence, consciousness

Ruby on Rails: a brief evaluation

I have used a lot of different computer-programming languages over the years. I’ve had some favorites, fussed at a few, produced some decent software with them.

The latest that I have endeavored to use is Ruby; more specifically, a website-creation framework known as Ruby-on-Rails.

This language and platform take the prize: they have been the greatest black hole of time, and the greatest source of frustration, I have ever associated with a programming platform.

Not least amongst the frustrations is the plethora of people who say they like it. Perhaps that is a phenomenon that results when a technology becomes fashionable for whatever reason: enough so that everyone using it is already sitting next to someone else who is well used to it. I’m only being half-serious, of course. I believe it is partly because I come from a systems-level software-engineering background, with no small amount of experience with full-bore, capable languages like C++, C#, Ada, and Java. I think perhaps I have developed a mental outlook whereby I depend upon the language being well-defined, explicit, and clear. With C#, for example, I can always look up where a given class or method comes from, how it expects to be used, and what it gives me. Even if it is poorly commented, I can at least find out how it is defined at the syntactic level.

Not so with Ruby. Variables may not exist until they are assigned to, and it is rarely clear where they come from. My toolset has no ability to jump from usage to definition, as you can in Microsoft’s Visual Studio. The definition of the language itself is bizarre, and I don’t mean that in a good way. It’s not just a different syntax: the design of the language itself seems disjointed and ugly. The books I have found, and the bits of online tutorials, all leave out huge portions of the story.

For example:

attr_accessible :variable1, :variable2

What does that do? Does it declare two instance variables? Or does it say that, if those two variables ever did happen to exist, that they would be assignable?

That looks ugly as shit.  Compare that with how you declare three instance-variables in C#:

public int _count = 0;
protected string _name;
private int _id;

Here, I have used a convention whereby you can instantly know that something is indeed an instance variable because it starts with an underscore (not a requirement, just one popular convention). Your variables have a type. They always have that type (unless it’s a dynamic variable, but that’s a different story). The best feature of static typing is that if you misspell a variable somewhere when you attempt to use it, or simply neglect to declare it at all, the compiler immediately tells you. What the compiler does not do is watch you fumble and tinker and fumble and google and get frustrated, while it chuckles smugly.

With C#, you can see where these come from. What type they are. What value they’ll have if not assigned yet. And there is no question of how to refer to them.

With Ruby, how do you use an instance variable? Say you are inside an instance method of a class MyClass. Do you use variable1? self.variable1? @variable1? @@variable1? self#variable1? :variable1? Or self[:variable1]?

Yes, I did find some clues to some of those – but not a consistent, clear answer that worked without a lot of trial-and-error.

The syntax is horribly counter-intuitive. If you have an instance variable, denoted by @name, and within a method you assign a value to it but neglect the leading at-symbol, you have instead created an entirely new local variable. No warning is given; you simply have a bug that you will hopefully discover in your runtime testing.

You can create programs with Ruby. But I would suggest that neither Ruby nor Rails is a tool for creating programs that need to actually work reliably.

Very important tip: Use a virtual machine (VM) !

Since I needed to use Linux as the development platform anyway, I created a virtual machine for this purpose. This is a good policy in general, for development. I use VMware Workstation 10, and in this case loaded it with Ubuntu 13.04. I used Sublime Text 2 for the text editor, and Thin for the development web server (Webrick evidently is broken).

I believe one reason Ruby-on-Rails (RoR) became popular is that it does provide some nifty generators for creating a minimally functional CRUD program based upon some model. That is a good feature, because it jump-starts you with a working website (although nowhere near a presentable solution, of course) that you can use as a starting point. You enter a command in the Linux shell, and it makes it for you. That was the only way I could create a program: I ran their scaffold generator to create an initial cut. Several times. Started over. Searched for other tutorials that actually worked (most don’t!). Then, once I had a program that ran without error, I took a Snapshot (VMware’s term for capturing the exact state of the virtual machine) of my VM at each stage where things did work, so that I could revert back to it. Whenever things became a hopeless dead end, I took notes, dumped the freak’n thing, and reverted back to the most recent working Snapshot.

With such a vague, unclear language-definition at the core, how does a system survive? But wait – there’s more!

There are many parts to RoR. The tool-stack is quite large, and then you must bring in plugins and gems to accomplish anything. With RoR you have to be very conscious of versions: most parts do not work with the latest versions of the other parts. So you have bundles, or gemsets, or environments, or sandboxes. It is a nice idea to have a Gemfile which specifies what to bring into your project, and exactly which version. That, of itself, is more than just a nice feature; it is essential for getting anything done. After you have spent weeks of long hours trying to discover what works with what (and which of the many parts simply do not work, never will, and perhaps never did), the Gemfile saves you by freezing in place your selection of gems and the versions of those gems.

That seems to be the essence of RoR. It is a stack of parts, each and every one of which has its own personal project and community of developers behind it. If you are extremely lucky, enough of those parts will work to arrive at a working solution. The language itself is very terse. And if you like concise definitions, that might please you at first, until you realize it is so unclear and vague that it makes no sense unless you are intimately familiar with this specific platform.

I found that everything I coded was basically copy-pasted snippets from samples. Those snippets, although occasionally useful, were also the greatest source of frustration. For example, Stack Overflow has a lot of Q&A on the subject. Most of the answers did not work, or were flat-out typed incorrectly. Not a few: MOST. On that website I do not see how to down-vote an answer, which is a shame because most are clearly shit. That is a whole separate story: why anyone would consider it acceptable to respond to someone’s technical question by typing up a code snippet without seeing whether it works is beyond me.

Another problem might be in the nature of open-source software. There are too many script-kiddies shoving gobs of code into the project, code of very crappy quality. They often don’t document anything, or they write incorrect documentation, and the pieces are frequently not compatible with each other. Perhaps because they have no skin in the game (they’re throwing it in for free anyway), there’s little incentive to invest time in making it a quality product. There is no software engineering; it’s just gobs of junk code.

The phrase “you get what you pay for” is not always true, but with RoR I believe it does apply. This is not a toolset you want to employ just because it is free. I highly recommend you explore something like ASP.NET-MVC with C# and Microsoft’s Visual Studio: it is finely engineered, and you acquire skills with a fine, state-of-the-art programming language that handles many roles with aplomb. Or Java, or Python with Django.

With C#, for example, you’ll type more characters on the keyboard. The language is not as concise as Ruby. You have to declare your variables, and decide which data-type they’ll be. But only a fool bases his comparison on that alone. It’s better to type a few more characters, and have a programming construct that is clear and unambiguous, that you can come back to later and maintain and change and make use of, than to have a terse little set of characters that mysteriously works because you spent an obscene amount of time tinkering and googling to get it to.

I have noticed that there are websites claiming to measure the popularity of programming languages, based upon the amount of Google traffic or number of questions on Q&A sites.  I would suggest this is misleading. When I was first learning Java, or C#, I found it very easy to pick up a book, play around with the tutorials, and dig into creating stuff. With a clear syntax, clear semantics, and a little bit of Googling – it wasn’t long before I was quite comfortable with every aspect of the language and could create fairly complex software. With RoR the opposite situation prevails: a massive amount of googling is necessary for virtually every step of the way. I cannot overstate this point — RoR is so unclear and confusing that you will spend an obscene amount of time googling, trying tips and techniques that are incorrect, before making any headway.

So, Ruby-on-Rails: the worst possible choice for a web-development platform. The scenario where it can work, I would say, is where you already have someone who is quite familiar with it (preferably more than one, so that they can team up on it). I’m guessing that would probably be someone who started learning programming with Ruby as their first language, so that they did not start out with a mental outlook that resists Ruby’s quirky syntax and lack of solid parts. If you’re more than just a small one-project startup (if, for example, you have multiple software products to build or maintain), you may find that your RoR developers are of little use on your other projects, because RoR is all they know. With Java or C# you have a language and tool-stack that you can redeploy for a lot of other needs. This is not an authoritative assessment, of course; it’s just an impression from a first experience with it.

Posted in Software Design, Uncategorized