How to selectively run Keyboard Maestro macros in a synchronized environment

A challenge I’ve always faced in running Keyboard Maestro on multiple Macs is maintaining the macros that are common to all of them: when I update a macro on one machine (say, changing the API key of a service I’m accessing), I have to remember to go make the same update on the others.

Keyboard Maestro provides a solution to this problem, by allowing you to synchronize your macros across multiple machines. Their implementation, however, in contrast to, say, Hazel’s folder-scoped approach, is all or nothing, meaning that you can’t have macros on one machine that don’t exist on the others. And that can become a problem, especially with macros that are scheduled to run periodically.

Keyboard Maestro provides two approaches to address this problem:

The first is the ability, for any given group (folder) of macros, to click “Disable on this Mac”.

Unfortunately, there are a number of shortcomings to this option. For example, any time you add a new group of macros to a given machine, you have to remember to potentially go around disabling it on the others.

The second approach, and the better one in my opinion, is to condition the execution of any macro on the UUID (universally unique identifier) of the machine on which the macro is running. Here’s an example of how this works.

The first step is to maintain a macro that determines the UUID of the current machine, and defines a list of named UUIDs for the machines you’ll later be referencing. I run the following macro daily, and whenever I add a new machine, I add its UUID to the list of named machines by temporarily running the disabled action that copies the current machine’s UUID to the clipboard.
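For reference, here’s a sketch of how such a macro can fetch the UUID via a shell script on macOS (the real macro may use a built-in KM token or action instead; the helper names here are mine):

```shell
# Sketch: read the current Mac's hardware UUID from the I/O Registry.
# ioreg emits a line like:  "IOPlatformUUID" = "564D1234-..."
extract_uuid() {
  awk -F'"' '/IOPlatformUUID/ {print $4}'
}

# macOS-only; a KM "Execute Shell Script" action could run this and save
# the result to a KM variable for later comparison.
machine_uuid() {
  ioreg -rd1 -c IOPlatformExpertDevice | extract_uuid
}
```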

With this in place, I can now condition the execution of other macros by machine. The following is an example of a macro that runs daily, and quits FaceTime on my MacBook Pro.

If I wanted this macro to run on two machines, I could add a second UUID check, and change the condition to “any”.

While this approach requires additional effort when creating your macros, it provides a big benefit: the ability to manage, from a single machine, the conditional execution of macros on all of your machines.

How to perform a currency lookup in a Numbers spreadsheet

Apple recently added to the Numbers spreadsheet the ability to pull live stock prices from the internet, making it possible to track portfolio performance.

To access this feature, you use the STOCK function:
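For example, to pull Apple’s current share price into a cell (as I understand the function, the attribute argument accepts names like “price”, which is also the default):

```
=STOCK("AAPL", "price")
```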

Since the feature pulls data from the Yahoo Finance service, the symbols you should use are those used at Yahoo. For most stocks I’ve come across, the symbols are the same as those used at Google, but they do seem to vary slightly for non-US stocks and currencies.

To track the Euro/USD exchange rate, the symbol used at Yahoo is “EURUSD=X”, but using this symbol in the Numbers STOCK function returns an error. The solution, as I found in this discussion at Apple, is to use the CURRENCY function:
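For example, to get the current Euro/USD exchange rate (as I understand the function, omitting the optional attribute argument returns the rate itself):

```
=CURRENCY("EUR", "USD")
```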

Email Verification

I’m the owner of a Gmail address that bears my name, in the form first.last@gmail.com.

Many others who share my name have addresses that are slight variations of mine, e.g. first.last2@gmail.com or first3.last@gmail.com, or even first.p.last@gmail.com. You get the idea.

Often when these people sign up at websites, they mistype their email address—and accidentally enter mine.

On account creation, modern websites send a verification email to the registered address, containing a link that the user must click before they can use the service. This verification-loop confirms that the person actually owns the email address they entered. You’ve probably experienced this yourself.

If I’ve directed you to this article, it’s likely because your company does not verify email addresses, such that I’m currently experiencing one or more of the following problems:

  • I’m receiving notifications, alerts, and user-related communications (often containing personal data), and I’ve been unable to stop them because either:
    • Your communications don’t have an unsubscribe link, or
    • There is an unsubscribe link, but it requires logging in to confirm
  • Your service doesn’t allow me to reset the account password simply by knowing the email address, i.e. it’s requiring me to provide some user-specific information I wouldn’t know.

In other words, I am stuck, have wasted time that I shouldn’t have wasted, and need your help.

But just as importantly, I need you to get the message to whoever in your company is responsible for the website, insisting that they add email address verification to the account creation process, to prevent this from happening in the future.

Thank you.

Decommissioning old email addresses with FastMail

The first business email address I used, [email protected], now almost twenty years old, is the source of 95% of the spam I receive. I no longer use this address, and would simply like to kill it, but every now and then the arrival of an important message reminds me that decommissioning it could result in missing something important.

Our company uses FastMail for email hosting, and the account has several domains aliased, including makalumedia.com. Chatting with FastMail support, I discovered that I could use their advanced “Sieve” support to effectively kill the address without the risk of missing important emails.

Here’s how I did it:

  1. In Mail.app, I created a smart folder that collected all mail addressed to [email protected] during the past 10 years (and which is not in my junk mail folder). This is the starting point of my list of “known senders” from whom I’ll continue to receive mail.
  2. I exported this smart folder to a mailbox file on my Desktop.
  3. I then used the Mac app “eMail Extractor” to parse a list of all email addresses found in that file.
  4. I then used BBEdit to clean up the list, leaving me with a single copy of each unique {domain}.{tld} entry.
  5. I then created the following Sieve rule in my account at FastMail:
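Reconstructed as a sketch (the domain is from this article; the two-entry sender list is a placeholder for my much longer real list of known-sender domains):

```
require ["envelope", "reject"];

# Any mail delivered to an address at my retired domain...
if envelope :domain :is "to" "makalumedia.com" {
    # ...is rejected unless the sender matches a known-sender domain.
    if not header :contains "from" ["example.com", "example.org"] {
        reject "This address is retired. Please contact me via my blog for current contact information.";
    }
}
```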

This sieve triggers on any mail received at my old makalumedia.com addresses. It then checks whether the sender is in my list of known senders (which in my real sieve is much longer than the above). If the sender is not in that list, it rejects the mail with a message directing the sender to contact me through my blog for my current contact information.

Since setting this up a few days ago, my spam has been reduced by probably 90%. The few messages that have gotten through were from spammers who happened to be on my known-senders list, so I went and removed them from the list. Over time, then, my known-senders list will get cleaned of the few spammers who were present in the original list.

All in all, I’ve been super happy with Fastmail. Their service is well-designed, technically solid, and provides just enough geeky flexibility to do advanced stuff like the above. Well worth the money!

Support Authentication

When I sign up for an online service, I like to use an email address that’s unique to that service, i.e. something like [email protected]. Email for my-special-domain.com is then configured to forward all incoming mail to my personal email address.

This allows me to do two things:

  1. Know which services sell my address on to third-parties. (If I start getting spam on this domain, I can figure out where it came from.)
  2. Kill any address for which incoming mail gets out of hand.

This works fine, except for one problem, and it’s a problem that shouldn’t exist:

Often when emailing [email protected], I’ll get a reply back indicating that—for “security” purposes—I must email support from the address associated with my account at the service.

What’s the problem with that? The problem is that the “from” address of my support enquiry provides absolutely no authentication or security at all, since email headers are dead-easy to forge.

Therefore, if a service wants to authenticate support conversations, there’s only one way to do it, and that is to provide an internal messaging system accessible only once a user authenticates into the service’s website. (Most financial institutions have this, since getting user authentication right is particularly important to them.)

I decided to post this to my blog, in order to have something I can conveniently point to in the future, when trying to convince these services that they’re misguided and causing unnecessary inconvenience to users who prefer to use throw-away addresses on their accounts.

Disappointing interaction design at Apple

Long-time Apple customers became accustomed over the years to thoughtful and delightful interaction design. As Apple has grown—and perhaps as Steve Jobs has passed, and Jony Ive’s involvement seems to be sunsetting—cracks have begun to appear.

Here are two examples I ran into just this morning.

Enabling Do-not-disturb in Notification Center

Trying to enable “Do-not-disturb” in Notification Center, I ran into two problems:

  • First, it’s not clear which of these tabs is active.
  • Second, it took me a while to figure out that the Do-not-disturb control is only exposed when scrolling down in the notification. There are no UI cues at all to help with discoverability here.

Assigning a photo to a contact

In Contacts.app, when trying to assign a photo to a person, you’d think the picker would default to the contents of your People album, and provide a usable UI for finding and selecting someone (with sensible fallbacks in case you never configured Faces). Instead, we’re dropped into the root level of the Photos hierarchy, and by the time we navigate to the People album, we’re left with a list that’s only capable of showing the first few letters of first names.

Isn’t there someone at Apple whose job is to look out for these kinds of details, which are such an integral part of the brand we’ve come to know?

The difference between developers and product designers

Our company is bidding on the re-development of an existing product that has outgrown the technical framework on which it was originally built. The customer has received a handful of offers, and the range of costs and technologies found in those proposals is causing him considerable uncertainty in his choice.

In response to that uncertainty, we’re nudging him to look beyond whether to use Ruby on Rails (our choice), Meteor or Laravel since, at the end of the day, the success or failure of his business will not hinge on technology. Instead, we’re encouraging him to consider the difference between a developer and a product designer, and focus on the critical question of who is capable of creating a product that will ultimately prove successful to his business.

To illustrate, let’s consider what happens when you sign up for an account on their existing platform. Upon first login, you see something like this:

The original specifications for this product probably contained something along the lines of, “The system will have an accounts screen that lists all colleagues associated with the organization.” The developer then went about the task of satisfying the requirements, thinking:

When the account screen is accessed, I’ll query the database for all colleagues. And to account for the case there are no colleagues, I’ll show the message, ‘No colleagues found’.

Most developers focus on requirements and technology—i.e. the database query, the message to show if the query returns nothing, etc.—and fail to reflect deeply on the actual use of the product they’re building. In this case, the developer didn’t consider the one instance—and a critical one in terms of product success—of an empty database query that every single user will experience: their very first engagement with this screen as a new user.

As a new user in this system, I’m left disoriented and confused:

  • Where am I, and what am I supposed to do?
  • The “No colleagues found” text seems like an error message. One minute in, and I’ve already done something wrong?
  • “Show blocked colleagues?” What is a blocked colleague? If I click that, the only thing that happens is that the text changes to “Hide blocked colleague”.

Had I created this account as a potential new customer wanting to “kick-the-tires,” there’s a good chance that I’d leave and not return, since experiencing friction in my very first interaction with the product is probably a good indication of what’s to come should I stay.

A good product designer is continually putting himself or herself in the shoes of the user, taking into account their context, their mindset, their knowledge and expectations, and looking to resolve any aspects of interaction with the product which potentially introduces friction.

In this example, a good product designer would identify the need for a “blank slate” version of the account screen that’s welcoming and orienting for first-time users. Perhaps something like:

And therein lies the enormous difference in value between the average developer, and the very few who are good product designers. The former creates collections of features that “satisfy requirements”, while the latter creates coherent, effective and ultimately successful products.

Why keeping it simple would be a better choice for TransferWise

Overview

In the world of web application development, we sometimes face technical decisions whose trade-offs extend beyond the technical. Those non-technical trade-offs can be subtle, and perhaps difficult to identify, yet critical to the business.

In this article, I want to highlight as an example my experience with the TransferWise payment system, in which technical decisions ultimately work contrary to the core of the product.

Background

In the early days of web applications, browsers like Firefox and Safari could only render web pages whose contents were structured in HTML and possibly styled with CSS. Any “logic” that formed part of the application had to be executed on the server.

Whenever you clicked a link on a screen, you’d experience a page refresh as the browser sent the request data back to the server, waited for the server to perform any necessary checks and calculations related to the request, and then your browser would display the HTML/CSS that was returned by the server.

So in those days, your browser only displayed things; any “thinking” happened on the server.

Time progressed, and browsers gained the ability to execute JavaScript software, thereby opening the door to implementing “logic” that gets executed within the browser client itself.

One of the most common first uses was in signup forms, as the browser could check that your entered-twice passwords matched, without requiring a page refresh and request to the server application.

Things got even more sophisticated when the browser could make a server request that’s transparent to the user. You’ve probably seen this when entering your username in a signup form: a small spinner appears to the right, followed by a green checkmark informing you that, “Yeah, that username is still available!”

As “front-end” technologies continued to evolve over the years, we’ve gotten to the point where entire web applications are implemented in JavaScript, and run within the browser.

So, today, a fundamental decision to be taken by a developer when he or she implements a web application is:

Should I implement this logic on the server, or in the client?

The argument I want to make in this article is that this decision should often be taken by the organization, and not simply left to designers and developers.

Context is everything

The benefit of using client-side logic is generally a smooth and seamless user experience, since the user doesn’t have to wait for page refreshes. The trade-off, however, is the risk of bugs in the user interface, since the JavaScript and rendering engines of different browsers (and even different versions of the same browser!) can vary considerably.

There are some application contexts in which the risk of interface bugs is compensated for by the value of a seamless and interactive user interface:

  • For example, if you’re developing a fast-paced interactive game, it could well make sense, in the interest of a smooth user experience, to implement the entire product as a client-side application.
  • Or let’s say you’re implementing a product that’s likely to be used by your customers several times daily. In that case, saving a few screen refreshes might materially improve the experience when compounded daily over the period of an entire year.

At the same time, there are some application contexts in which a seamless user interface does not compensate for the risk of exposing the user to interface bugs. And here, I want to highlight an example of a company that has taken absolutely the wrong decision in this regard.

Disruption of an industry

In the past, it was terribly expensive for me to pay European contractors from my American company. First, the transfer itself would cost about $30. But then, I’d lose over 3% with respect to the market rate when the bank would convert my USD source funds to the destination currency of Euro.

TransferWise completely disrupted the market of moving and transferring money internationally, charging a fraction of what banks charge. They do this by taking advantage of volume to avoid even having to make transfers, i.e. if Customer A in the US transfers $100 to someone in Europe, and Customer B in Europe transfers the equivalent of $100 to the US, TransferWise can make the two transfers happen simply through off-setting accounting entries, using Customer A’s money to pay Customer B’s recipient, and vice versa.

What is the TransferWise product?

So what is the TransferWise “product”? If you ask me, it’s the saving of tremendous time and costs when making an international transfer.

And here’s where TransferWise have really messed up. They additionally view their “product” as the experience of making a transfer, and from a front-end technology perspective, they have decided that a slick user interface compensates for the risk of exposing their users to bugs associated with the heavy use of front-end technologies.

To be specific: The process of making a transfer with TransferWise involves five steps:

  1. You specify the source and destination currencies, and the amount to be transferred.
  2. You choose who is sending the money (in case you happen to have both a personal and business profile on record).
  3. You choose a recipient from a list of existing contacts, or create a new one.
  4. You choose how you’ll get the money to TransferWise, e.g. through an ACH or wire transfer from your bank.
  5. You review the transaction, and confirm if everything looks good.

Looks simple enough, but there’s quite a lot of logic that has to happen:

  • You have to compute the amount of the conversion from the source to destination currency, based on the current rate.
  • You have to alert the user in case that rate expires during the process of setting up the transaction (i.e. if they take too long.)
  • You potentially have to walk the user through the “new contact” workflow.
  • You have to flag the user if the chosen recipient doesn’t have address details on file.
  • You have to walk the user through the “link new bank” workflow in the case they want to do an ACH transfer with a bank that wasn’t previously associated to their account.
  • You have to exclude the ACH option if the daily limit has already been exceeded.

So the process of initiating a transfer can get surprisingly complicated.

TransferWise’s flawed decision

Well, TransferWise decided to implement the entire workflow in one single web page, in which each step in the process is contained within its own component that opens and closes accordion-style.

The consequence of this approach, as opposed to pushing all the logic and checks to the server in page refreshes, is that during the entirety of my use of TransferWise over the past few years, I have run into user interface bugs probably more than 50% of the time.

And sometimes we’re talking about showstoppers—i.e. bugs that, in the name of a slick user experience, actually prevent me from making a transfer!

For example, the day that the confirmation button simply wouldn’t activate. Or the day when state wasn’t tracked across components and the confirmation button provided no on-click feedback, such that multiple clicks suddenly skipped you several steps ahead in the process, leading to all sorts of chaos.

Or, what happened to me today…

I use TransferWise once a month to pay my European contractors. Since the only thing that changes each month is the amount I pay to each, I could really use “payment templates”. But since those don’t exist in TransferWise, the next best thing is to click “repeat payment” on some previous transfer, and then change the amount.

But it would seem that this isn’t the intended purpose of “repeat payment”, since clicking the option takes you directly to the confirmation component of the transaction screen. You can click back into Step 1 to change the amount, but I suspect this particular use of the feature is what caused me to see the following when I finally returned to the confirmation component:

Try what again? Going from Step 3 to Step 4? Everything looks fine. What’s the problem?!

Neither refreshing the page, nor clicking “Confirm”, removes the error message or allows me to proceed. As with most of these UI errors at TransferWise, it would appear that I’ve reached a dead end.

But, in this case, guess what? When I return to my accounts page, I see that the transaction was successfully processed. So the error I was shown—i.e. the one that blocked the whole process—is itself erroneous!

Again, it’s all about context

So let’s back up and think about this.

  • Once per month I need to make some transfers.
  • I use TransferWise for this because they are fast, and save me a lot of money.
  • I do not use TransferWise because their transfer creation workflow is better than my bank’s. I don’t give a shit about that. If this were something I did 12 times per day, then maybe; but this is something I do 12 times per year.

Of course, it’s not impossible to have a reliable application that’s front-end heavy. It’s just that it’s much, much easier to have a reliable application that’s not. And in the case of TransferWise, a slick front-end doesn’t contribute to the core value proposition of the product, and my own experience demonstrates that the value gained comes nowhere near compensating for the risk unnecessarily taken.

Conclusion

The very, very last thing I want to experience in this context is a bug that prevents me from making my transfer.

For the past year or so, each time I’ve experienced a UI bug and reported it to TransferWise, I’ve also taken the opportunity to encourage them to reduce their dependence on front-end technologies, and to give priority to making the process of creating a transfer as reliable as possible. But each time, missing the forest for the trees, their team have instead focused on trying to track down the particular bug I’m reporting (“Have you tried that in Chrome?”)

And so my hope is that through publishing this article, the larger issue might cross the radar of someone in TransferWise management who’s in a position to consider the broader product goals.

1Password for Teams and Families incompatible with VPNs

One of the services for which I’ve truly been happy to pay is 1Password for Families, which allows my wife and me to centrally manage information vaults that are shared between us, and with our kids, across all our Mac and iOS devices.

Some time ago, I wrote about how I secure our home network with a VPN. After doing that, we began having to frequently respond to CAPTCHAs when accessing any website that uses the CloudFlare security platform, as CloudFlare (understandably) doesn’t trust the IP addresses of the Private Internet Access VPN service that we use. This is an annoyance, but certainly something we can live with.

Unfortunately, however, I recently discovered that all of our 1Password applications (iOS and Mac) have stopped syncing their data with 1Password’s servers. And to make matters worse, the apps don’t provide any feedback to the user that synchronization has failed! It was only after removing a Families account from one of the devices and trying to add it back that I finally saw a “No response from server” error.

My experience with CloudFlare-managed websites immediately led me to suspect that 1Password had their client API sitting behind CloudFlare, and an email to 1Password support confirmed this:

After reviewing the situation with his colleagues at 1Password, however, he then followed up to say that, sorry, but it looks like their service is just incompatible with Private Internet Access:

Right now, because so few users are affected by this, 1Password’s response is just: “Sorry, you can’t use our service if you’re going to use a VPN.” This seems short-sighted for the following reasons:

  1. The problem doesn’t only affect users on Private Internet Access IP addresses. It affects users on any IP address that CloudFlare distrusts. Currently that’s at least PIA users, and almost certainly users of other popular VPN providers. And over time, one can certainly expect that set of IP addresses to expand.
  2. More fundamentally, when accessing a website, CloudFlare provides a means by which a legitimate user on a distrusted IP address can successfully get through—by responding to a CAPTCHA. In other words, CloudFlare has a model in place that anticipates false positives. If you’re going to put your software API behind CloudFlare, as 1Password has done, then you must also engineer a model and user experience that accounts for false positives. (Perhaps CloudFlare offers a mechanism to surface a CAPTCHA-like challenge to the human user of an app that’s getting trapped at its API by CloudFlare.)

Hopefully, the team at 1Password will reconsider the situation, and find a solution.

How to manage a Tomato router via the CLI using Keyboard Maestro

As I wrote a few weeks ago, I have my home network connected to the internet through a VPN router running the Tomato firmware. Although the setup works great, I did run into two issues that I needed to detect and resolve programmatically using Keyboard Maestro (KM):

Rebooting the router

The router frequently hangs—about once every few days—and requires a reboot. Manually logging into the web interface to click the “reboot” button gets tiresome, and so I decided to see whether I could automate this with Keyboard Maestro.

I have KM running on a Mac mini whose ethernet interface is connected to my VPN-protected LAN, and whose wifi interface is connected to my ISP’s router. The wifi interface is configured as default in the Network Settings preferences such that all internet traffic is, by default, routed through the ISP’s router. (This is to provide Slink-based remote access to my home network.)

So the first problem to solve was how to test internet access on the non-default ethernet interface. Fortunately, the gracious KM author, Peter Lewis, pointed out that the ‘ping’ command supports an option (‘-b’) to specify the network interface.

Now that I could check whether the router was down, the next problem to solve was programmatically rebooting it. The Tomato software, being a Linux distribution, supports SSH access, and Peter pointed out that if I installed the mini’s SSH public key on the router, KM could then log in without a password. That, and a little Googling, allowed me to figure out the KM script needed to reboot the router via SSH.

Putting this all together, here’s the KM macro I created to test whether the Tomato router is down, and to reboot it if so. (It’s configured to run every 5 minutes.)
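In plain shell, the macro’s logic looks something like this sketch (the interface name, router address, and reboot command are my assumptions; the real version is built from KM actions and variables):

```shell
IFACE="en0"            # assumed: the mini's non-default ethernet interface
ROUTER="192.168.1.1"   # assumed: the Tomato router's LAN address

router_is_down() {
  # On macOS, ping's -b option binds the probe to a specific interface
  ! ping -b "$IFACE" -c 2 "$ROUTER" > /dev/null 2>&1
}

reboot_router() {
  # Passwordless, because the mini's public key is installed on the router
  ssh "root@$ROUTER" reboot
}

# KM runs the equivalent of this every 5 minutes, setting the
# ROUTER_REBOOTING variable before triggering the reboot:
#   router_is_down && reboot_router
```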

Now, you might be wondering what the ROUTER_REBOOTING variable is for. Turns out, there’s another Tomato-related issue I also solved with Keyboard Maestro.

Restarting the router’s VPN client

The Tomato router supports two VPN clients, VPNClient1 and VPNClient2. I have client 2 connected to a US-based VPN server, and route my AppleTV through that, allowing me to watch content that is IP-restricted to the USA. For minimum latency, though, I have client 1 connected to a server in France, and have it configured to route all other traffic on my home network.

Problem is, when the router boots, and perhaps due to the order in which the two clients start, all traffic ends up getting routed through US-based client 2. To fix this, I just need to stop and restart client 1.

To address this problem, I created another KM macro that checks the geo-location of my external IP address, and if it’s not “FR”—and if the router isn’t currently rebooting; hence the ROUTER_REBOOTING check—restarts VPN client 1.
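As a shell sketch (the geo-IP service URL, router address, and the Tomato ‘service’ command name are assumptions to verify against your own setup):

```shell
ROUTER="192.168.1.1"   # assumed: the Tomato router's LAN address

current_country() {
  # ipinfo.io is one of several services returning a two-letter country
  # code for the caller's external IP
  curl -s https://ipinfo.io/country
}

restart_vpn_client1() {
  # Restart Tomato's first OpenVPN client over SSH; check the service
  # name against your firmware build
  ssh "root@$ROUTER" "service vpnclient1 restart"
}

# Run periodically by KM, and skipped while ROUTER_REBOOTING is set:
#   [ "$(current_country)" != "FR" ] && restart_vpn_client1
```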

My awful experience installing Windows 10 in VMWare Fusion 8

In reviewing Lance’s performance at the Spanish national championship this past weekend, the GM trainer from Andalucia strongly encouraged us to buy “ChessBase” as a tool to keep up with the latest in opening theory. Since Lance already runs Windows 7 in VMWare Fusion—in order to run PlayChess and TeamSpeak—I didn’t expect there to be any issues installing ChessBase (which is only available for Windows.)

I was wrong. Trying to install ChessBase in Windows 7, I got an error that some C++ runtime was missing. I downloaded the runtime from the link included in the error message, but it wouldn’t install either.

Not wanting to waste time on all this, I figured the best way forward would be to just update to the latest Windows—i.e. Windows 10. And so began the following nightmare:

  1. When you go to the Microsoft store to buy Windows 10, you’re presented with three options—(1) Free upgrade for Windows 7, 8, or 8.1 (2) Buy Windows 10 (Download), and (3) Buy Windows 10 (USB – English). (I’m not sure why “English” is listed on the USB option…)
  2. Here’s what you see when you click the free upgrade option—a screen that suggests you buy a new PC, and provides zero information about how to upgrade. Heavy sigh, but having to jump through hoops to get something free didn’t strike me as surprising.
  3. Again not wanting to waste time, I decided to just buy the thing. And the purchase process turned out to be a lot more straightforward than the free upgrade process, as expected.
  4. After my purchase, I had to choose which version to download: Windows 10, Windows 10 N, Windows 10 KN or Windows 10 Single Language. Of course, there’s no explanation of what the differences are, so I just rolled the dice and chose the first.
  5. Then you have to choose “Home” vs “Pro”. Again, no explanation of the differences, so I just chose “Home”.
  6. Then you have to choose 32-bit or 64-bit. You’d think Google could help with this, but not really. Rolling the dice again, I just went with 64-bit. Bigger is better, right?
  7. I was then given a download link to an .iso file, and a product number. I downloaded the .iso file, and used it to start the process of creating a new VM in Fusion 8. Fusion asked for the username, password and product number—all of which Windows later asked for again.
  8. When the Windows 10 installation window opened, it asked for the product number. I entered mine, and was told the number was invalid. Of course. After a bit of Googling, I learned that you actually don’t need a product number to install Windows 10 (Was my purchase for nothing?) so I clicked, “I don’t have a product number”.
  9. The next screen asked if I want to do an “Easy Install” or a “Custom Install”. According to Google, one shouldn’t touch the Custom Install!
  10. Clicking “Easy Install” led me to a screen saying that I’d booted my Windows machine from “Windows Installation Media”, and that I needed to disconnect that, reboot Windows, and then re-insert the media when prompted. WTF!?! Now, you would think that somebody else would have run into this, and you’d also think that VMWare themselves would have run into it while installing Windows 10, but the internet offers no solution to this problem.
  11. In desperation, and feeling I’d hit a complete dead end, I decided to give the dreaded “Custom Install” a try. I clicked that, surprisingly wasn’t asked to make any custom choices, and the Windows 10 installation proceeded to complete successfully. Un-believ-able.
  12. In order to get reasonable integration with your Mac, the first thing you have to do when a new VM boots is install “VMWare Tools”. Unfortunately, for me, the “Install VMWare Tools” menu item was grayed out. Google said the problem is that VMWare Tools requires a virtual CD-ROM device to be attached. (Why on earth?!?…) Unfortunately, in my case, there was no way to add a CD-ROM to the VM, because neither my MacBook Air nor Lance’s iMac has a physical CD-ROM! Trying to add one anyway using the “Auto-Detect” setting led to a boot error, “Can’t attach to the Sata 0.0 device”. And again, unthinkably, neither the VMWare website nor Google could seem to help!
  13. The solution, as I eventually discovered, was to manually download VMWare Tools (which of course comes with no README; just a bunch of .iso files), attach the Windows 10 VM’s CD-ROM device to the “Windows.iso” file included with the VMWare Tools manual download, boot the VM, and then install VMWare Tools manually from the attached “virtual CD-ROM”. Apparently, this is only needed for the first installation of VMWare Tools; in the future, it should be able to upgrade itself without a virtual CD-ROM attached. We’ll see…

At this point, almost five hours later, I could finally install ChessBase under Windows 10, and provide it access to our shared network device.

To me, it seems absolutely crazy that it hasn’t occurred to anyone at VMWare to write up a tutorial documenting what I imagine is a common use case of someone wanting to purchase Windows 10, and then create a Fusion VM, with VMWare Tools installed.

Update—After posting this article, a couple other observations came to mind, illustrating just how crazy this Windows world is:

  • When you install MacOS, you’re shown a progress bar. The progress might not be accurate, but at least you’re shown the visual indication that something is happening. When you install Windows 10, you get a screen that shifts between dark and light blue (is it breathing?) and says, “We’ve got some great features waiting for you.” It’s not really clear that something is going on in the background. In fact, at some point, I clicked the screen just to make sure it wasn’t waiting for me to do that to continue!
  • The biggest hilarity happened when installing ChessBase. The first time you launch the app, it asks you to enter its product code. That’s normal. What’s not normal, though, is that it also asks you to respond, on the same screen, to a CAPTCHA! Can you imagine? An installer with a CAPTCHA! But it gets worse. All the letters in the CAPTCHA are capitalized, and the input field auto-capitalizes whatever you type in, which, OK, seems to make sense if they want to remove case-sensitivity from the operation. But here’s the thing—if you type in a lowercase letter, even though it gets upper-cased in the input field, the lower-case letter gets sent to the validation, and IT IS case-sensitive! So even though it looks like you’re submitting an upper-case letter, you’re not! Insane!

SendGrid made things right

Update — Readers will note that I’ve changed the title and URL of this article, and that’s because shortly after posting it, representatives of SendGrid reached out, apologizing for what happened, explaining that my experience isn’t what they intend, and offering to make it right.

All in all, barring what happened this morning, we’ve always had good experiences with SendGrid, and their product is really well designed, so I’ve decided to continue giving them our business.


My company Makalu was engaged by a US educational non-profit to develop an online platform called “Letters 2 President,” through which America’s youth can publish letters to the candidates of the 2016 presidential election. While the platform is under development, a website was established to introduce the project, and start taking preliminary signups from schools, libraries and other organizations wishing to participate.

http://www.letters2president.org

Most web applications these days outsource certain functions to third-parties. For example, it’s typical to use Amazon S3 for storage, CloudFlare for content distribution and site protection, and in the case of sending transactional emails, we’ve tended to use SendGrid.

Until now, that is. After today, we’ll no longer use their services, nor will we continue to recommend them to our customers. Here’s why…

For our project, we need to send notification emails to our customer whenever new applications arrive from organizations wishing to participate. We need to send notification emails to organizational administrators when group leaders create accounts. And we need to send notification emails to group leaders whenever a student creates or modifies a letter to be published on our site.

That’s why we need a transactional email service like SendGrid.

As usual in our projects, we create dedicated accounts with these third-party providers, rather than using our own Makalu accounts, so that when a project is finished we can hand over everything, including provider accounts, and the customer is free to operate their project without any dependencies on Makalu.

And in that regard, this morning I tried to set up a SendGrid account for use in our Letters 2 President project.

Ten minutes after creating the account, I received a notice from SendGrid that, based on their review of a broad range of data points, our provisioning request had been rejected.

A rejection based on an automated data check process didn’t come as a surprise, for a number of reasons:

  1. Although the account was created in the customer’s name, the email address I used when setting it up was a Makalu address, in order that, until project handover, we can receive all the various confirmation and related emails from the service.
  2. As our office is located in Europe, the IP address that SendGrid saw on the request was outside the United States, and didn’t correspond to the business address specified in the account creation process.

I imagined that a simple email could clear the matter up, and so I replied to the rejection notice, explaining the purpose and nature of our project, explaining who’s involved, explaining the reasons for the checks I imagined triggered the rejection, and offering to answer any questions they might have in order to get the account provisioned.

Ten minutes later, I received a cold and unfriendly follow-up saying thanks, but that for reasons which wouldn’t be disclosed, our account would not be activated. Just like that. No chance of a discussion. End of story.

And, adding insult to injury, their note ends with the sarcastic-sounding, “We wish you the best in your future endeavors.”

I completely understand why a transactional email company has to be careful in the provisioning of accounts. We all know how big a problem spamming is. But I can’t understand at all why a company would be completely unwilling to even engage with a new customer who presents a clear case for the legitimacy of their use of the service.

So that ends any current and future business relations we’ll have with SendGrid. Fortunately there are many other providers of transactional email, who’ll perhaps enjoy the exposure when we later publish about the building of this exciting new platform.

How to switch wifi networks with Keyboard Maestro

In a recent blog post I explained how I secure my home network with a VPN. In that article, I also explained how I enabled external access to my home network using the Slink software running on a Mac mini server, whose primary network interface is wifi, connected to my ISP router, and whose secondary interface is ethernet, connected to my home gigabit switch.

This setup works great, but it did require solving a tricky problem:

My home wifi network (created by the AirPort Extreme) is called “Hacienda”, and the wifi network created by the ISP router is called “HaciendaOlive”. Since I want all my home devices connected to Hacienda, that network is given priority over all other known networks on my iPhones, iPads, etc.

The problem is that that network priority list propagates to the Mac mini (and all my devices) via iCloud, and so anytime there’s a network interruption or the machine reboots, the Mac mini connects to the Hacienda wifi network (instead of HaciendaOlive)—which of course kills my external access to that machine.

What I need is for the mini, and only the mini, to have HaciendaOlive set as its highest-priority wifi network. But this doesn’t seem to be possible, unless I were willing to disable iCloud on that machine.

My solution to this problem was a Keyboard Maestro macro that runs every five minutes, checking whether the computer is connected to the HaciendaOlive network, and if not, switching it to that network. This required researching some obscure AppleScript code, so I thought I’d post the macro here for the benefit of others searching for how to switch wifi networks using Keyboard Maestro. The blurred text in the image is the wifi network password.
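For anyone who prefers the command line to AppleScript, the logic of the macro can also be sketched in shell using macOS’s networksetup tool. This is a sketch of the same idea, not the macro itself—the interface name, SSID and password below are placeholders for your own setup:

```shell
#!/bin/sh
# Which interface is wifi? Check with: networksetup -listallhardwareports
INTERFACE="en0"
TARGET_SSID="HaciendaOlive"
TARGET_PASSWORD="your-wifi-password"   # placeholder

# networksetup reports e.g. "Current Wi-Fi Network: Hacienda";
# strip the label to get just the SSID.
CURRENT_SSID=$(networksetup -getairportnetwork "$INTERFACE" | sed 's/^Current Wi-Fi Network: //')

# Only switch if we're not already on the target network.
if [ "$CURRENT_SSID" != "$TARGET_SSID" ]; then
  networksetup -setairportnetwork "$INTERFACE" "$TARGET_SSID" "$TARGET_PASSWORD"
fi
```

Run from a Keyboard Maestro “Execute Shell Script” action on a five-minute timer, this does the same check-and-switch as the macro.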

Enjoy!

How to protect your home network with a VPN router

In this article, I describe how I added security to my home network by installing a router that directs all internet traffic through an encrypted VPN connection. The adventure includes my experience with the FlashRouters company, the Tomato router firmware software, an OpenVPN connection to the Cloak network, the Linksys E2500 router and the Netgear Nighthawk R7000 router.

Continue reading How to protect your home network with a VPN router

Using WordPress redirection plugins to create easy-to-remember social links

I’ve never been good at remembering my social media URLs. Am I “dafacto” there or “mhenders”? At Facebook, where neither was available, what was that URL stub I chose? And doesn’t LinkedIn include something like /i/ or /in/ in their URLs?

Well today I solved that problem by using the Yoast SEO Premium WordPress plugin’s “redirect” feature (also available in the free alternative, Redirection). Now, all my social URLs are easy to remember:
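A quick way to sanity-check a redirect like this is with curl. The URL here is illustrative—substitute your own domain and one of your own redirect slugs:

```shell
# -s silences progress output, -I fetches headers only; a working
# redirect shows a 301/302 status line plus a Location header
# pointing at the target social profile.
curl -sI "https://example.com/twitter" | grep -i -E "^(HTTP|Location)"
```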

Woot!

The importance of external bootable backups

This morning I posted an article about some CrashPlan-related issues discovered when migrating my wife’s dead iMac to a new machine. Another lesson learned in that situation was about the importance of external bootable backups.

My wife’s old iMac, dating back to 2011 I believe, had an internal 256GB SSD and a 1TB internal hard drive. Back in the day, I thought I could improve her desktop tidiness by doing without an external drive, and instead creating a 256GB partition on that internal 1TB drive for maintaining a bootable backup.

What I didn’t consider at the time is what actually happened last week—green bars suddenly appeared on her screen, followed by a shaking and shifting of the image, increasing in frequency until the whole screen went white—and the machine shut down. And then upon reboot, the whole ordeal would start again!

Evidently the machine was dying, and it occurred to me then that the only bootable mirror I had for migrating to a new Mac was the hard drive inside that dying iMac!

Since the bulk of the computer’s files lived on the other portion of the 1TB drive, managed by BitTorrent Sync, the startup drive itself contained relatively little data. So I had hopes that I could keep the machine booted long enough for Carbon Copy Cloner to mirror the startup drive to an external USB drive. Lucky for me, after a third reboot, the machine stayed up long enough—barely!—for CCC to finish its backup. The machine repeated its meltdown literally seconds after the backup completed.

Lesson learned: Always maintain an external bootable backup of important machines!

Mac OS X — admin vs wheel group (and how that affected CrashPlan)

Last week my wife’s four-year-old iMac died. When the new one arrived, I set it up via migration from a USB-connected drive containing a mirror of her old system.

After booting up the migrated machine, I ran into an issue in which the CrashPlan app wouldn’t start, and the menubar app reported “Can’t connect to backup destination”. I tried running the CrashPlan uninstaller, and then doing a fresh install, but unfortunately it didn’t help.

Checking the console, I found messages reporting that the file “.ui_info” couldn’t be found in the directory /Library/Application Support/CrashPlan. Which was strange, since I could clearly see that file existed in a Terminal directory listing.

What I also noticed was that the CrashPlan directory was owned by the “wheel” group, while most of the other directories in Application Support were owned by the group “admin”.

I then tried manually deleting the CrashPlan directory in the Terminal, and running the CrashPlan installer again. This time, the CrashPlan directory was owned by the “admin” group—and, consequently, the CrashPlan app successfully started up.
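In shell terms, the check—and the in-place fix I could have tried instead of deleting and reinstalling—looks something like this. The path is CrashPlan’s standard support directory; sudo is required:

```shell
# Show owner and group of the CrashPlan support directory.
ls -ld "/Library/Application Support/CrashPlan"

# The group name is the 4th field of ls -l output.
GROUP=$(ls -ld "/Library/Application Support/CrashPlan" | awk '{print $4}')

if [ "$GROUP" = "wheel" ]; then
  # Either fix the group ownership in place...
  sudo chgrp -R admin "/Library/Application Support/CrashPlan"
  # ...or delete the directory and re-run the installer, as I did:
  # sudo rm -rf "/Library/Application Support/CrashPlan"
fi
```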

This experience prompted a couple of observations:

  1. Even when authenticated by an admin user, the CrashPlan uninstaller was unable to remove its CrashPlan directory in Application Support.
  2. A fresh install of CrashPlan didn’t set the correct group ownership of the CrashPlan folder in Application Support, which led to the app being unable to start.
  3. I have the impression that the “wheel” group may have been deprecated at some point in the OS X evolution, but is still passed on from machine to machine in migration upgrades. I wonder whether it would be a good idea, or even safe, to do a global change of anything on the computer owned by “wheel”, changing it to “admin”?

If you know the answer to the third, please let me know in the comments. Thanks!

How I migrated my snippets from TextExpander to Keyboard Maestro

TextExpander is a Mac utility for creating auto-expanding text shortcuts—“snippets”—that can save you time on things you repetitively type, such as email signatures, your telephone number or boilerplate responses to support emails. With version 6, Smile decided to move away from paid upgrades, to a subscription plan that would cost roughly $5 per month. The move was controversial, a situation which is well documented at Michael Tsai’s blog. I’ve been using TextExpander for 10 years, but decided against continuing with a subscription plan.

Continue reading How I migrated my snippets from TextExpander to Keyboard Maestro

iCloud Photo Sharing

Having an extended family spread geographically far and wide, I’ve been pleasantly surprised to find that a Mac/iOS feature I’d previously rarely used has ended up connecting us far better than any social network, and that is iCloud Photo Sharing. My parents, brother, our kids, their kids, etc. love seeing photos appearing in the streams, and being able to comment on them.

How to disable root login on a DigitalOcean droplet

When you create a droplet (virtual private server) at DigitalOcean, the service sends you an email containing the login password of the root user. The problem with this setup is the risk that your server gets compromised through a brute-force password-guessing login attack.

DigitalOcean provides a more secure alternative, if you first add your SSH public key to your DigitalOcean account settings. In that case, when DigitalOcean creates your droplets, it will disable root login with a password, and configure the server so that you can log in as root using only your SSH key.

I only learned about this safer option after having created my droplet, and so I spent a little time trying to figure out how to rectify things — i.e. I wanted to add my SSH key to the server, and disable root login with password.

Surprisingly, I had to piece together instructions from a couple of articles, as well as getting some support from our company’s system administrator, and so I thought I’d post a summary here for the benefit of others:

Step 1: Copy your SSH key to the DigitalOcean server. (You do this from your local computer, and this assumes you already have an ssh key locally.)

cat ~/.ssh/id_dsa.pub | ssh root@[your_server] "cat >> ~/.ssh/authorized_keys"

Step 2: Edit the file /etc/ssh/sshd_config, setting the PermitRootLogin setting to “without-password”. I used Transmit’s “Edit in Transmit” feature to do this. Also, don’t, as I did, confuse this file with the similarly-named “ssh_config”.

PermitRootLogin without-password

Step 3: Log in to the server as root, and restart sshd:

service ssh restart

After sshd restarts, you should be able to log in as root without entering a password, and your server should now be a bit more secure.
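To verify the result, you can force key-only authentication from your local machine, and double-check the server-side directive. (As above, [your_server] is a placeholder for your droplet’s address.)

```shell
# From your local machine: BatchMode disables password prompts,
# so this only succeeds if key-based login is actually working.
ssh -o BatchMode=yes root@[your_server] "echo key-based login OK"

# On the server: confirm the directive is in place.
grep "^PermitRootLogin" /etc/ssh/sshd_config
```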

WordPress Hosting — From DreamHost to DreamPress to GoDaddy to DigitalOcean

This website runs on WordPress, and over the past several years has seen its hosting move from the DreamHost shared environment, to DreamPress managed hosting, to GoDaddy managed hosting to, finally, DigitalOcean. This article explains why.

Continue reading WordPress Hosting — From DreamHost to DreamPress to GoDaddy to DigitalOcean

How to create a kill-switched VPN on Mac OS X with Little Snitch

In this post, I describe why, after years of using the wonderful Mac/iOS VPN product, Cloak, I’m experimenting with an alternative approach, that combines Private Internet Access (PIA) and Little Snitch. (2015-08-28 — As mentioned in an update at the end of the article, I’ve actually now switched back to Cloak, but using Little Snitch as the kill-switch.)

Continue reading How to create a kill-switched VPN on Mac OS X with Little Snitch

Feature request for 1Password — Provide PIN opening on TouchID enabled devices

The passcode to unlock my 1Password keychain is long—very long—and typing that in on an iOS device is time consuming and error-prone.

Fortunately, Agilebits provides two short-cuts:

  1. For iOS devices that support TouchID, you can open 1Password simply through recognition of your fingerprint, in the same way you unlock the device itself.
  2. For iOS devices that do not support TouchID, 1Password allows you to set a four-digit PIN that can be used to unlock 1Password after you’ve initially authenticated once with your passphrase. This option remains secure, in that you only get one chance to enter your PIN; if entered incorrectly, the app again requires full authentication with your passphrase.

Either from having naturally sweaty fingers, or living in a humid, coastal environment—or a combination of both—TouchID does not work reliably for me. In fact, it only works about 10% of the time I try to use it. From scanning the same finger multiple times during setup, to complete resets, I’ve tried every recommended approach to improve TouchID—but all to no avail; it simply doesn’t work for me.

As a consequence, while 1Password is usable for me on my iPad mini via the PIN mechanism, it’s awful to use on my TouchID-enabled iPhone 6. Every time I need to open 1Password, I have to type in that very long passphrase.

For that reason, I wish that 1Password would offer the PIN access mechanism on TouchID devices, as an option.

Speaking with the support staff at Agilebits, they’ve communicated that this isn’t possible, because the current implementation is to offer TouchID on supported devices, and fall back to PIN access on devices that don’t support it. But that’s just the way it’s currently implemented; there shouldn’t be any technical reason why 1Password couldn’t offer both options on TouchID devices.

I understand that I’m in the minority, and that for most people, TouchID works just fine. And I know that many product decisions are made considering trade-offs related to the size of affected groups. My hope, however, is that the people at Agilebits will consider that for those of us in the minority, the usability cost of this particular problem is huge, and encourages the use of a shorter, less safe passphrase.

And perhaps considered in that light, they’ll add both options to 1Password running on TouchID devices as well.