Happy 4th anniversary!

Today I was meeting with a customer and he asked me how much LightSwitch experience I have. So I opened my blog’s dashboard, sorted the posts by ascending date, and there it was: my first post ever… Debugging your custom LightSwitch (Shell) Extension, dated July 27th, 2011. What a crazy coincidence: today, to the day, it’s been exactly 4 years since I transmitted my first words into the blog-o-sphere. Happy 4th anniversary guys ‘n gals!

How to make Visual Studio always run as an administrator

There’s a gazillion reasons why you might want to run Visual Studio as an administrator.
Mine was a NuGet package that tried to run a PowerShell script on install, and complained about the execution policy not being set correctly.
Normally, NuGet automatically sets the PowerShell execution policy for the Visual Studio process itself so that it can run any script. Which might be dangerous, but hey, if we were shy of danger we’d be doing something boring like being race pilots instead of something adventurous like being developers. What developer would run the same track twice, manually? The dullness…

Anyways, a process cannot set its own PowerShell execution policy if it’s not running as an administrator.
Long story short: not running as administrator === not automatically being able to run PowerShell install scripts.
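
(Side note: if you ever want to check from code whether your own process is elevated, a minimal sketch using plain .NET could look like this:)

using System.Security.Principal;

static bool IsRunningAsAdministrator()
{
    // An elevated process runs with the built-in Administrators role active.
    var identity = WindowsIdentity.GetCurrent();
    var principal = new WindowsPrincipal(identity);
    return principal.IsInRole(WindowsBuiltInRole.Administrator);
}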

Luckily, I was doing a coding session with a very bright partner of mine, and she told me this trick:
– press Windows+E and go to C:\Program Files (x86)\Microsoft Visual Studio XX.X\Common7\IDE
– right-click devenv.exe and hit: troubleshoot compatibility
– let it detect issues for a bit…
– troubleshoot program
– click: “the program requires additional permissions”
– yes, save these settings
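
For the scripting-inclined: as far as I can tell, the troubleshooter simply writes a compatibility flag to the registry, so you could set it yourself. A sketch (the devenv.exe path below is an assumption, adjust it to your own installation):

using Microsoft.Win32;

class MarkVisualStudioRunAsAdmin
{
    static void Main()
    {
        // Adjust to your own devenv.exe location.
        const string devenv =
            @"C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\devenv.exe";

        // Per-user compatibility settings live under AppCompatFlags\Layers:
        // the value name is the exe path, the value data the flags to apply.
        using (var layers = Registry.CurrentUser.CreateSubKey(
            @"Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers"))
        {
            layers.SetValue(devenv, "RUNASADMIN");
        }
    }
}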

From now on, VS will always and forever run as an administrator!

Ka-tching!

Forget P@ssw0rds, use phrases that motivate you!

It’s a good practice to change your passwords every so often. If you’re working at a company with IT staff, they’ll probably force you to change your password every month. This is supposed to increase security, but in reality it decreases it, as people build a system for themselves to help remember what this month’s password is: J@n*05/2015

Since people change their password but not their own formula, changing passwords adds absolutely no security: even a first grader can guess the current mutation given an earlier password.

So here’s the idea: stop using a fixed formula, and use phrases that motivate you!

Every month, pick one small goal for yourself. Just a small thing. Something you want to be better at, something you need to build up some courage for, something small you want to change. Then, motivate yourself by using that as a password. Since you’ll be typing it multiple times per day, you’ll motivate yourself to actually go do/change/forget/forgive it. It’s like your little personal life coach, whispering a single small goal you can accomplish this month in your ear, multiple times per day.

Use this system for a year, and I promise you you’ll be happy you did. Here’s a couple to spark your inspiration:
GoHome@5PM
YouAre110%Sexy
Ask$10Raise
Lose5%WeightBeforeSummer
SendSurprise2Mom&Dad
NoMailsBefore11AM
2Coffees/day=enough
Blog1+/month
Me+Mojo->Run30Minutes
6<hoursOfSleep<9
CheckIn1+/day
EveryEstimate+2
EveryEstimate*4

… Yea, had to learn the hard way on that last one…

Feel free to inspire others too, share your favorite motivational password in the comments below!

Changing the ASP.NET web.config connection strings at runtime

So here’s an interesting challenge we had to overcome today: every developer in the team has small variations in his/her workstation setup. Instead of micro-managing each installation (we all know how developers love to be micro-managed, and even more so how they love to be told how to set up their workstation), we thought we’d take a stab at changing the connection strings from the web.config at runtime.

Wait, aren’t there a million posts on how to do that already? Well, yes, but most of them use web.config transformations (which aren’t run when you debug locally), or actually save the web.config (physically changing the file on disk, so that the next developer who gets the default web.config is screwed).

What we wanted to do is load the default web.config but, per developer, change some of the in-memory values.
Turns out, all you need is a little reflection:

#if CUSTOMCONNECTIONSTRINGS
        // Requires: using System.Configuration; using System.Reflection;
        private static void SetConnectionString(string name, string connectionString)
        {
            // The connection string collection is marked read-only at runtime;
            // flip the collection's private bReadOnly field so we may modify it.
            typeof(ConfigurationElementCollection)
                .GetField("bReadOnly", BindingFlags.Instance | BindingFlags.NonPublic)
                .SetValue(ConfigurationManager.ConnectionStrings, false);

            // Each entry is read-only too; its _bReadOnly field lives on the
            // ConfigurationElement base type of ConnectionStringSettings.
            var connection = ConfigurationManager.ConnectionStrings[name];
            typeof(ConnectionStringSettings).BaseType
                .GetField("_bReadOnly", BindingFlags.Instance | BindingFlags.NonPublic)
                .SetValue(connection, false);

            // Overwrite the in-memory value; the web.config on disk stays untouched.
            connection.ConnectionString = connectionString;
        }
#endif

Then, in your Global.asax Application_Start method, before doing anything else:

#if CUSTOMCONNECTIONSTRINGS 
#if BOB
  SetConnectionString("LocalSqlServer", "wow");
  SetConnectionString("DefaultConnection", "much monkey patch");
  SetConnectionString("AndAnother", "very connectionstring");  
#endif   
#endif

Finally, each developer goes to the Configuration Manager (the dropdown next to ‘Debug’), creates a new configuration based on ‘Debug’, then in the project properties > Build > adds the conditional compilation symbols BOB and CUSTOMCONNECTIONSTRINGS.
Now each developer can run his/her own configuration and manage how they’ve set up their own system. The code that does the monkey patching of the connection strings is not even included in the release output, and the actual web.config file is never modified and will always contain the default values.
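
To double-check that the patch took effect, a quick sanity check you could (hypothetically) drop at the end of Application_Start, dumping the effective in-memory values to the debug output:

#if CUSTOMCONNECTIONSTRINGS
  // Requires: using System.Configuration;
  foreach (ConnectionStringSettings css in ConfigurationManager.ConnectionStrings)
  {
      System.Diagnostics.Debug.WriteLine(css.Name + " -> " + css.ConnectionString);
  }
#endif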

GitHub: the social coding experience

A couple of months ago, I joined an open source project called Aurelia as a core team member. Like many open source projects, it uses GitHub for its source control.

I’ve been a Microsoft stack lover all my tech life, so until recently my only visits to GitHub were when someone (for reasons unknown to me at that time) hosted a sample there. My only experience with git was clicking the ‘download as zip’ button on GitHub to grab that sample.
Microsoft really never trained me to think otherwise.
I’m a big believer in ‘sharing is caring’ though; that’s why a lot of my blog posts come with inline code samples, samples on MSDN, or an extension on CodePlex. Imagine where we, the LightSwitch community, would be now if LightSwitch had had its source openly available from the start.

About 5-6 weeks ago, I started working on the Aurelia validation plugin. It was an eye-opener, to say the least. After only a week of building out some core components, another Aurelia team member created a pull request (pull requests are like a request to merge a provided changeset) to implement translations for the validation messages. Great, I thought at the time, a team working together on a project.
Yet it was more than that. That same week, someone outside of the team submitted a pull request to turn the repository into a JSPM package so it can be easily installed using the JSPM package manager. Soon after, someone fixed some small typos in the documentation. More ‘language packs’, in Mexican Spanish, Swedish, Turkish and other languages, arrived that week. Some bugs were reported as issues, with clear code samples on how to reproduce them and sometimes even code to fix the issue, and another issue was opened simply to discuss an integration strategy with another open source validation plugin.
Someone even wrote additional unit tests.
Unit tests!!!
Someone willingly sacrificed personal time to write… unit tests…

I slowly grew to realize the amazing truth: open source projects are not just projects where the source is publicly visible, and GitHub isn’t just a source control website. The open source community, and GitHub in particular, are also, and perhaps most importantly, about the social coding experience: working together with a variety of people to accomplish common goals, to share the creation of something awesome, to share and intensify the joy of our common passion.

Earlier this year, I was talking to some Microsoft folks and they were so excited about their recent announcement that the Microsoft server stack is going completely open source.
I didn’t get the big deal at that point.
I use the technology already, and if there’s something I’d like to do differently, there usually is a way to configure it to my will, or I reverse engineer the sources to see if I can monkey patch it, and carry on with my task at hand.
Yet now I understand: open source is not about having the source in the open, it’s about having an open invitation to join the coding experience.

Let’s hope that everyone and every team at Microsoft truly gets that too. Let’s hope that their next products, and next versions of existing products, embrace the same love for the social coding experience. Let’s hope that Microsoft can teach their somewhat traditional B2B LOB introvert application developer flock to embrace the social coding experience too.

Because, after all, what a beautiful experience it turns out to be.

Supporting OData $inlinecount & JSON verbose with Web API OData

OData, the Open Data Protocol, is an awesome protocol for exposing data from your server tier, because it allows the caller to use special query string arguments to filter, sort, select only particular columns, request related entities in a single call, and do paging.
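
For example, a single (hypothetical) request could combine several of those query options at once:

/api/query/people?$filter=age gt 50&$orderby=name&$select=name,age&$top=10&$skip=10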

Basically, this means you end up with an “open” data service API, where you literally just expose data and leave it up to the client to dictate the specific use case. Whether you want that for your own client is kinda negotiable, but when you’re building an application where you really want to support and nurture users building 3rd party integration tools, OData is the perfect candidate for an “I don’t know beforehand what scenarios you want to accomplish” API.

Furthermore, creating an OData read service is really simple: you take the Microsoft.AspNet.WebApi.OData NuGet package, you expose an IQueryable in your controller, and you slap on the [EnableQuery] attribute:

    public class QueryController : ApiController
    {

        [EnableQuery]         
        [HttpGet]
        public IQueryable People()
        {
            return this.dbContext.People;
        }
    }

So here’s the problem: suppose there are 100 people in the database, with ages evenly divided from 1-100. The caller requests all people with age > 50 ($filter=age gt 50). We also applied a page size (which you should really always do, to avoid self-inflicted DDoS attacks) of maximum 25 records in a single response. At this point, we do not want to just send back 25 records; we also want to inform the caller that we have applied a page size and that there are really 50 people matching the search criteria. And wouldn’t it be nice if we could also inform the caller how to get the next page?

The good news is: according to the OData spec, you can! By returning an “OData verbose” response (“verbose” being the opposite of “light”, the new OData default response format), you can send back a result containing not only the actual records, but also additional metadata like the number of people that matched the search criteria and how to get the next page of results.

The really bad news is: the Web API OData implementation does not support the $inlinecount query parameter (which instructs the server to send back the count after filtering but before paging). OUCH!

Weirdly, after following a dozen blog posts (like this really good one) I stumbled upon the fact that this is only partly true… The Web API OData implementation does in fact support the $inlinecount query parameter; it just does not in any way support sending back the JSON verbose format in which the caller actually gets to see that count…
Wait, whot?
A caller can send the $inlinecount, and the EnableQueryAttribute (which really does all the heavy lifting) will correctly handle it, but instead of properly sending the count to the client it will simply keep it in memory and send only the results back. Same story with the link to the next page of records, when you apply a PageSize.
So the good news is: to re-enable the $inlinecount, or in other words to send back a more verbose response to the caller, you can make your own EnableQueryAttribute:

using Newtonsoft.Json;
using System;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Web.Http;
using System.Web.Http.Filters;
using System.Web.Http.OData;
using System.Web.Http.OData.Extensions;
using System.Web.Http.OData.Query;

namespace Lobsta.webapi
{
    internal class ODataVerbose
    {
        public IQueryable Results { get; set; }

        [JsonProperty(NullValueHandling = NullValueHandling.Ignore)]
        public long? __count { get; set; }

        [JsonProperty(NullValueHandling = NullValueHandling.Ignore)]
        public string __next { get; set; }
    }
    public class QueryableAttribute : EnableQueryAttribute
    {
        public bool ForceInlineCount { get; private set; } 
        public QueryableAttribute(bool forceInlineCount = true, int pageSize = 25)
        {
            this.ForceInlineCount = forceInlineCount;
            // Enable server-side paging by default (unless a page size was already configured)
            if (this.PageSize == 0)
            {
                this.PageSize = pageSize;
            }
        }
        public override void OnActionExecuted(HttpActionExecutedContext actionExecutedContext)
        {
            // Force $inlinecount=allpages by appending it to the query string,
            // unless the caller already sent an $inlinecount of their own
            if (this.ForceInlineCount && !actionExecutedContext.Request.GetQueryNameValuePairs().Any(c => c.Key == "$inlinecount"))
            {
                var requestUri = actionExecutedContext.Request.RequestUri.ToString();
                if (string.IsNullOrEmpty(actionExecutedContext.Request.RequestUri.Query))
                    requestUri += "?$inlinecount=allpages";
                else
                    requestUri += "&$inlinecount=allpages";
                actionExecutedContext.Request.RequestUri = new Uri(requestUri); 
            }

            //Let OData implementation handle everything
            base.OnActionExecuted(actionExecutedContext);

            //Examine if we want to return fat result instead of default
            var odataOptions = actionExecutedContext.Request.ODataProperties();  //This is the secret sauce, really.
            object responseObject;
            if (
                ResponseIsValid(actionExecutedContext.Response) 
                && actionExecutedContext.Response.TryGetContentValue(out responseObject)
                && responseObject is IQueryable)
            {
                actionExecutedContext.Response =
                    actionExecutedContext.Request.CreateResponse(
                        HttpStatusCode.OK,
                        new ODataVerbose
                        {
                            Results = (IQueryable)responseObject,
                            __count = odataOptions.TotalCount,
                            __next = (odataOptions.NextLink == null) ? null : odataOptions.NextLink.PathAndQuery
                        }
                    );
            }
        }

        private bool ResponseIsValid(HttpResponseMessage response)
        {
            return (response != null && response.StatusCode == HttpStatusCode.OK && (response.Content is ObjectContent));
        }
    }
}

Note: this is highly opinionated sample code: it always uses a page size of 25, and always returns the inline count… Change it to your liking, for example by checking whether the caller actually requested the verbose JSON format, to be OData spec compliant.
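A minimal sketch of such a check (the helper name is an assumption, not part of the OData libraries), which you could consult before swapping out the response:

        // Hypothetical helper: only build the verbose envelope when the caller
        // explicitly asks for it via an Accept header of application/json;odata=verbose.
        private static bool WantsVerbose(HttpRequestMessage request)
        {
            return request.Headers.Accept.ToString().Contains("odata=verbose");
        }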
Finally, replace the ‘EnableQuery’ attribute with our custom one:

    public class QueryController : ApiController
    {

        [Queryable]         
        [HttpGet]
        public IQueryable People()
        {
            return this.dbContext.People;
        }
    }

Putting it to the test, I called /api/query/people?$orderby=name&$filter=age gt 50&$inlinecount=allpages again, and now correctly receive my requested metadata:

{
 "Results":[
   {"age":51,"name":"Anna"} /* More results were included of course */
 ],
 "__count":50,
 "__next":"/api/query/people?$orderby=name&$filter=age%20gt%2050$inlinecount=allpages&$skip=25"
}

Coding tip #2: fail fast, don’t fail, don’t worry

Facebook recently pulled the plug on one of their data centers…
On purpose.

The idea was to investigate how well they could recover from live failures.

We developers, and beginning developers especially, sometimes have this weird notion that code should be perfect and withstand any storm. Truth is, something can and will always go wrong at some point in time, and we should stop fearing it.

The first, most noticeable form of something going wrong is an exception being thrown. Beginning developers will often shy away from exceptions. They’re cryptic, and they’re more likely to happen in the middle of a demo than while developing…

Fail fast

Hence, out of fear of introducing new exceptions by actually throwing one, beginning developers start writing code like this:

public object GetValue(string key)
{
    if (key == "CurrentUser")
        return SomeContext.User.Name;
    if (key == "CurrentTeam")
        return SomeContext.Team.Name;
    // Unknown keys get swallowed into a peaceful default value.
    return "Not found";
}

Peaceful, right? No matter what value you ask for, no exception shall ever leave this method.

The unfortunate thing here: if the calling logic is flawed somewhere, you might only find out much, much later in the process.
The above piece of code is called by some EmailTaskPreparer, which retrieves “current_user” (note the mismatched key) to create an instance of a task. That task is put on a queue; one hour later a worker process picks it up and processes it by getting the current user’s email address, then sending an email.
One day later, you get a bug report that there are undeliverable emails hanging around the system, and you get to embark on the pleasant adventure of backtracking every possible piece of code that sends emails, puts email tasks on queues, and builds those tasks.

The key lesson is: fail fast. If something is wrong, throw an exception on the spot instead of returning a peaceful default value.
The calling logic will still be just as flawed, but at least now you end up with a bug report stating that an InvalidKeyArgument exception was thrown when the EmailTaskPreparer called ‘GetValue’, which will be easy to find and fix, and will give you more time to actually get some real work done.
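
A fail-fast version of the earlier sketch could look like this; the built-in ArgumentException is just one reasonable choice (the InvalidKeyArgument name above is illustrative):

public object GetValue(string key)
{
    if (key == "CurrentUser")
        return SomeContext.User.Name;
    if (key == "CurrentTeam")
        return SomeContext.Team.Name;
    // Fail fast: an unknown key is a bug in the caller, surface it on the spot.
    throw new ArgumentException("Unknown key: " + key, "key");
}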

Don’t fail

Obviously, learning to code is all about learning to take everything in moderation. The next rule of thumb is to understand that exceptions are for ‘exceptional situations’ only.
When you have an abundance of exceptions being thrown all over your code, you’ll soon end up with a lot of try-catch blocks, and eventually you’ll end up with a code base that has two new problems:
– the code becomes less readable (at the bottom of your try block are a bunch of alternative code paths that make your logic harder to follow)
– the code becomes slower (the compiler can do fewer optimizations because it needs to make sure it can handle your expected exceptional code paths)

To address the first, consider adding logic to your classes that can pre-approve an operation, as in the sketch below. This is why an iterator has a ‘hasNext()’ function and a command has a ‘canExecute()’ function: you can ask if you should expect something to go wrong, and decide on how to handle that on the spot, instead of hundreds of lines lower in a catch block. It’ll make your code much more readable. Don’t fail if you could have avoided it.
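
A minimal sketch of such a pre-approval API (all names here are illustrative, not from any framework):

public class SendEmailCommand
{
    private readonly EmailTask task;

    public SendEmailCommand(EmailTask task)
    {
        this.task = task;
    }

    // Pre-approval: a cheap check the caller can run before executing.
    public bool CanExecute()
    {
        return task != null && !string.IsNullOrEmpty(task.RecipientAddress);
    }

    public void Execute()
    {
        // ... actually prepare and send the email ...
    }
}

The caller decides on the spot, no try-catch needed: if (command.CanExecute()) command.Execute(); else handle it right there.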

Don’t worry

Finally, there are very few use cases where you actually need to catch an exception. If you take the two previous rules to heart, exceptions will only occur when something really unexpected happens. In the literature, exceptions are considered ‘final’ (the code below where you throw an exception will not execute) because they signify that the system has entered a state from which recovery is not expected to be possible, and execution should not continue.
Hence, if an exception occurs that you could not possibly have avoided and there’s no way you can recover from it, why bother catching it?
Don’t worry. Really, you should only catch exceptions in a very limited number of cases:
– you could not avoid it (no ‘hasNext’, ‘canExecute’, etc.) but you still know how to recover from it. For example: reschedule the task for later execution.
– you want to hide exception details: a general catch-all block that catches any exception, logs it, and throws a new exception that hides any internals specific to the current layer of your application. For example, you catch and log the SQL exception (“Connection failed for user ‘Bob’ with password ‘Bob123’”), only to throw a new, generic DatabaseOperationFailedException, as sketched below.
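
A minimal sketch of that catch-log-rethrow pattern (logger, repository and DatabaseOperationFailedException are illustrative names):

try
{
    repository.Save(order);
}
catch (SqlException ex)
{
    // Keep the sensitive details (server, user, password) in our own logs...
    logger.Error(ex);
    // ...and hand callers a generic, layer-appropriate exception instead.
    throw new DatabaseOperationFailedException("Saving the order failed.", ex);
}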
Beyond the above use cases and a couple of others, catching and handling exceptions should not be a part of the majority of your code base.

Don’t worry: all systems will fail at one point or another. Just try to make sure that when yours fails, you’ll have a precise stack trace and a clean code base to help you trace the cause (or, that you know how to plug a data center back in).