This is a “quick” recap of my thoughts.
My 1st 3D modeling machine was a Quadra 800
What has been up with me, some may wonder. Well, a lot. I moved, decided to pivot, and am restarting my career. Of course, whenever you do that, you get a ton of recruiters emailing you about jobs from your old profession that are no longer suitable for you. Thanks to life events, I can’t work the long hours demanded, and besides that, despite my deceptively younger looks, I’m not a spring chicken anymore. Along with that, I realized that I can’t do repairs on small devices anymore. This is somewhat sad. But considering that my iFixit kit has paid for itself at least 10 times over the years, it’s not that bad of an outcome.

Another change is my outlook. Before this April, my view on life was that I had to clear my plate of everything put in front of me or let it pile up. However, looking at my reading list, there are literally over 100 articles I simply bookmarked after the synopsis or intro and never got back to. Add to that the countless languages (markup or compiled) I’ve looked at learning, and we see a truly daunting list. I’ve decided that things will get my attention as they always have: as needed.

The one thing I am putting on my plate over and over until I learn it is 3D. This follows my two-decade-old foray into 3D, when I bought some now-defunct program to run on my Quadra 800. It took hours to ray trace a simple render test. Now, however, it is for modeling objects for 3D printing. While you will never hear me call myself a 3D artist, it is one skill I know I can pick up again.

My skills are in a constant state of flux. Last year I spent recuperating from yet another person who overestimated their stopping distance and ended up plowing into my car and injuring me. The more things change…
Practice, Practice, Practice
When I am consulting with a client and navigating on my machine, they are absolutely stunned at my speed using just the GUI. I have to remind them: I’ve been using GUIs for 30 years, starting from the very first Macintosh and continuing across various OSes since then (from BeOS to X Windows and back again). Given that “cross-training” and a conservative 40,000+ hours of using practically every type of app, I’d think I would be an expert at efficiently navigating almost any app. As a side effect, I have also gotten very good at spotting good and bad UI. If I don’t know how to do something, I usually know the magic words and the search terms to use. If even I cannot find info quickly, then something about your app and/or your documentation is lacking.
The commercially available GUI is now over 30 years old. What was once a paradigm-altering way for communications engineers, researchers, and computer scientists to interact with their machines has firmly cemented itself in the landscape of interfaces, as have the mice and trackpads that came with it. Initially, the GUI was called a novelty that would quickly wear out its welcome by companies that have since staked everything on their misunderstanding of how a GUI should act. Now a more common use paradigm is direct touch. The conventions useful and familiar within the desktop metaphor have been replaced by a grid of icons that each open an app suited to the task. Again, people whose thinking is still bound by the conventions of prior use paradigms, conventions that work poorly or not at all without alteration to fit a new paradigm, are hobbling the efficiency of their user base. The baseline of porting a UI to a touch UI has been accomplished: where it was a double click to open, it is now just one tap; where it was a menu bar and window, it is now a navbar and a bottom “tab/panel/view.”
However, before this current paradigm shift happened, the GUI had already been mutating between versions and across OS platforms as new conventions were tried and either failed or took root. Most desktop OSes allow multiple ways to interact, and between platforms some interactions are preferred while others are simply cumbersome. Somewhere along the way, the fundamentals of UI design were forgotten and exchanged for the slickest-looking UI. Usability took a backseat to aesthetics, because the people who placed aesthetics first didn’t realize that aesthetics and usability are tied together. Thus the interactions needed to perform an advanced task became unnecessarily cumbersome, and the knowledge gap between the novice and the competent widened.
The Shine after 5 months. You can’t even see the extra security I added from the front while wearing the Shine (scuff not included)
First off, full disclosure: Misfit gave me a Shine, not for review, but as thanks for spotting and reporting a minor error on one of their pages the day they announced a related product. So, given that it was free, it was something I was grateful to receive, and it established the goodwill of the people at Misfit. The thing is, I’m not exactly the type that monitors and logs everything I do. In fact, given my physical limitations (mentioned before), I often can’t follow a workout regimen to stay in shape anymore. However, I am naturally curious, and after almost six months of using the Shine, I have been able to use it to monitor my daily activity and adjust how much I eat. This review examines an almost perfect product at this price point from the PoV of someone who isn’t interested in (or can’t afford) the current smartwatch offerings. On this level, the Shine succeeds in offering a simple way to monitor daily sleep and wake activity. Read on to see how it accomplishes this.
Disruption, not the tech type that changes industries, but the “hey, you got a minute?” or “hey, this is your gamifying app (or actual game) here, can you stop what you’re doing and use me?” kind, is fatal to the productivity of creative people, be they artists, musicians, or engineers.
If you are a creative type, you might already understand that the altered mental state you reach is one of crystallized concentration. If not, then trust us on this: interrupting a person in a state of creative “flow” (or whatever those of you who have named it call it) can be as devastating to productivity as a hard drive that dies without a backup. I am currently reading a really good book that touches on concentration, “Pragmatic Thinking and Learning” by Andy Hunt [@PragmaticAndy]. (More on that later or in some other post…)
If uninterrupted time to focus is known to be such a boon to productivity, then why must every app (and their crappy “lite” version) automatically load itself into the notification center of your platform?
If you want to solve a problem people have been banging away at for a long time, the last thing you do is look at it from the same angle as everyone else, and that is exactly what tech pundits are guilty of doing. Microsoft, in its latest round of Windows 8 updates, seems to have further muddied its interface trying to be all things to all people.
Long ago, I wrote that paradigm awareness is paramount in designing a good UI. However, Microsoft, blind to both accommodating and leveraging environmental differences, decided that slapping the exact same UX on products for two completely different use paradigms would be easier for users to learn and more efficient. Based on my experience and user feedback, nothing could be further from the truth.
Finally the tech pundits are waking up to the fact that Windows 8 should not be all things to all people. But they are stopping short of the logical solution I mentioned years ago because they are still looking at computer users using the wrong categorization scheme: business/“productivity” and home/consumer/“casual”/“consumption.”
While I have often said that a lot of UI changes are simply eye candy, adding nothing important other than “bling” to a design, not all UI changes fall into that category. However, looking back, I noticed my posts have beaten around a huge, unaddressed, important distinction of UI design that pretty much no company and very few active designers today seem to completely understand, judging from the latest and “greatest” products that are just as confusing for experienced users as they are for newbies.
While we all seem to inherently understand some form of graphic design language, few aside from UI designers are conscious of it. And even fewer of the professionals understand that this graphic design language has rules and conventions based on solid interaction principles. They seem to take for granted that a control is a certain way, without question, and either use it improperly or, worse, break the convention. Both of these problems arise because the UI designer does not know the reason behind the convention. I am sure many UI designers will rebut me, and they do know the reasons behind certain choices, but not all of them. The problem is, if a designer has read literature or learned UI from someone who omitted the explanations and reasoning behind the conventions, they only have half an education.
Badges? We do need some stinkin’ badges. Gracias!
HTML5 is here in full swing. With portions of CSS3 reaching Candidate Recommendation status and ES6 coming, it is critical for web developers to keep learning not only the new technologies but also current best practices. Because I try to do the right thing, I went to the HTML5 Developer Conference in SF. My editor was in town, and we ended up meeting; while I was enthusiastically telling him about it, he asked me if I would write up an article for DiceNews about what I found.
“That would be great,” I said. So, I pounded out a quick one the next day. You can read the article at http://news.dice.com/2013/04/15/lessons-learned-at-html5-dev/ if you so desire.
Feel free to comment here or there. But please forgive my generalizations. I know people are making full use of animations, and other modern features, but many more are not. And yes, I realize sometimes a page refresh is desired too. With that said, enjoy.
App.net might look like just another social service to some. And, in fact, it currently looks very much like Twitter did when it started: it is just a lot of tech-savvy people talking freely and enthusiastically about App.net and whatever strikes their fancy. No celebrities promoting themselves, no ad-spam, no fake users, no incredibly stupid posts (although there are some stupid posts, no one is stupid enough to post public calls to kill government officials, as one woman who has since disappeared did). App.net is just a lot of signal with very low noise.
I get at least a few invites each month to join a new SoNet. The invites usually get tossed into the trash almost immediately. A few get me to look at the site, but that’s usually it. Even if I do sign up to the site, I often let it languish and simply forget about it until they start spamming me to use their site, “log in with…”, or link my other SoNets to it.
Paying Not to Share, but to Selectively Share
App.net is 180° away from all of these sites, though, because their interests align with my interests:
Yet another major version release of OS X is out, and I have talked to a few people about it. For the most part, aside from a few “.0” bugs, the response has been pretty positive. I decided to upgrade after I noticed that a vast majority of the apps I use regularly released updated, Mountain Lion-compatible versions within days of its release. Also, there were no reports of data loss (not that I have to worry about that, given the religious-fanatic level of backups I keep) or any major problems from people who upgraded right away.
My Advice for Upgrading to Mountain Lion: 10.8
So, I followed my own advice previously posted about upgrading. I’ll recap it here. In short: