Forget P@ssw0rds, use phrases that motivate you!

It’s good practice to change your passwords every so often. If you work at a company with IT staff, they’ll probably force you to change your password every month. This is supposed to increase security, but in reality it decreases it, because people build themselves a little formula to remember what this month’s password is: J@n*05/2015

Since people change their password but not their formula, changing passwords adds absolutely no security: even a first grader could derive the current mutation from an earlier password.

So here’s the idea: stop using a fixed formula and use phrases that motivate you instead!

Every month, pick one small goal for yourself. Just a small thing: something you want to be better at, something you need to build up some courage to do, something small you want to change. Then motivate yourself by using it as a password. Since you’ll be typing it multiple times per day, you’ll motivate yourself to actually go do/change/forget/forgive it. It’s like your little personal life coach, whispering a single small goal you can accomplish this month in your ear, multiple times per day.

Use this system for a year, and I promise you’ll be happy you did. Here are a couple to spark your inspiration:
GoHome@5PM
YouAre110%Sexy
Ask$10Raise
Lose5%WeightBeforeSummer
SendSurprise2Mom&Dad
NoMailsBefore11AM
2Coffees/day=enough
Blog1+/month
Me+Mojo->Run30Minutes
6<hoursOfSleep<9
CheckIn1+/day
EveryEstimate+2
EveryEstimate*4

… Yea, had to learn the hard way on that last one…

Feel free to inspire others too: share your favorite motivational password in the comments below!

Changing the ASP.NET web.config connection strings at runtime

So here’s an interesting challenge we had to overcome today: every developer in the team has small variations in his/her workstation setup. Instead of micro-managing each installation (we all know how developers love to be micro-managed, and even more so how they love to be told how to set up their workstation), we thought we’d take a stab at changing the connection strings from the web.config at runtime.

Wait, aren’t there a million posts on how to do that already? Well, yes, but most of them use web.config transformations (which aren’t run when you debug locally), or actually save the web.config (physically changing the file on disk, so that the next developer who gets the default web.config is screwed).

What we wanted to do is actually load the default web.config but, per developer, change some of the in-memory values.
Turns out, all you need is a little reflection:

#if CUSTOMCONNECTIONSTRINGS
        // Requires: using System.Configuration; using System.Reflection;
        private static void SetConnectionString(string name, string connectionString)
        {
            // The connection string collection is read-only at runtime, so first
            // flip the private 'bReadOnly' flag on the collection itself...
            typeof(ConfigurationElementCollection)
                .GetField("bReadOnly", BindingFlags.Instance | BindingFlags.NonPublic)
                .SetValue(ConfigurationManager.ConnectionStrings, false);
            var connection = System.Configuration.ConfigurationManager.ConnectionStrings[name];
            // ...then flip the same flag on the individual entry (the field lives
            // on ConnectionStringSettings' base class, ConfigurationElement).
            typeof(System.Configuration.ConnectionStringSettings).BaseType
                .GetField("_bReadOnly", BindingFlags.Instance | BindingFlags.NonPublic)
                .SetValue(connection, false);
            connection.ConnectionString = connectionString;
        }
#endif

Then, in your Global.asax Application_Start method, before doing anything else:

#if CUSTOMCONNECTIONSTRINGS 
#if BOB
  SetConnectionString("LocalSqlServer", "wow");
  SetConnectionString("DefaultConnection", "much monkey patch");
  SetConnectionString("AndAnother", "very connectionstring");  
#endif   
#endif

Finally, each developer goes to the configuration manager (the dropdown next to ‘Debug’), creates a new configuration based on ‘Debug’, and then, in the project properties > Build tab, adds the conditional compilation symbols BOB and CUSTOMCONNECTIONSTRINGS.
Now each developer can run his/her own configuration and manage his/her own setup, the code that monkey patches the connection strings isn’t even included in the release output, and the actual web.config file is never modified and always contains the default values.

Github: the social coding experience

A couple of months ago, I joined an open source project called aurelia as a core team member. Like many open source projects, it uses github for its source control.

I’ve been a Microsoft stack lover all my tech life, so until recently my only visits to github were when someone (for reasons unknown to me at that time) hosted a sample on github. My only experience with git was clicking the ‘download as zip’ button on github to grab that sample.
Microsoft really never trained me to think otherwise.
I’m a big believer in ‘sharing is caring’ though; that’s why a lot of my blog posts come with inline code samples, samples on MSDN, or an extension on CodePlex. Imagine where we, the LightSwitch community, would be now if LightSwitch had had its source openly available from the start.

About 5-6 weeks ago, I started working on the aurelia validation plugin. It was an eye-opener, to say the least. After only a week of building out some core components, another aurelia team member created a pull request (a request to merge a provided changeset) to implement translations for the validation messages. Great, I thought at the time, a team working together on a project.
Yet, it was more than that. That same week, someone outside of the team submitted a pull request to turn the repository into a JSPM package so it could be easily installed using the JSPM package manager. Soon after, someone fixed some small typos in the documentation. More ‘language packs’ in Mexican Spanish, Swedish, Turkish and other languages arrived that week. Some bugs were reported as issues, with clear code samples showing how to reproduce them and sometimes even code to fix the issue, and another issue was opened simply to discuss an integration strategy with another open source validation plugin.
Someone even wrote additional unit tests.
Unit tests!!!
Someone willingly sacrificed personal time to write… unit tests…

I slowly grew to realize the amazing truth: open source projects are not just projects where the source is publicly visible, and github isn’t just a source control website. The open source community, and github in particular, is also, and perhaps most importantly, about the social coding experience: working together with a variety of people to accomplish common goals, to share the creation of something awesome, to share and intensify the joy of our common passion.

Earlier this year, I was talking to some Microsoft folks, and they were so excited about their recent announcement that the Microsoft server stack is going completely open source.
I didn’t get the big deal at that point.
I use the technology already, and if there’s something I’d like to do differently, there’s usually a way to configure it, or I reverse engineer the sources to see if I can monkey patch it, and carry on with the task at hand.
Yet now I understand: open source is not about having the source out in the open, it’s about extending an open invitation to join the coding experience.

Let’s hope that everyone and every team at Microsoft truly gets that too. Let’s hope that their next products, and the next versions of existing products, embrace the same love for the social coding experience. Let’s hope that Microsoft can teach its somewhat traditional B2B LOB introvert application developer flock to embrace the social coding experience too.

Because, after all, what a beautiful experience it turns out to be.

Supporting OData $inlinecount & json verbose with Web API OData

OData, the Open Data Protocol, is an awesome protocol for exposing data from your server tier, because it allows the caller to use special query arguments to filter, sort, select only particular columns, request related entities in a single call, and do paging.
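For example, all of the following are expressed purely in the query string (shown against the /api/query/people endpoint we’ll build below; the $-options are standard OData, the entity and property names are just for illustration):

/api/query/people?$filter=age gt 50       (filter: only people older than 50)
/api/query/people?$orderby=name desc      (sort server-side)
/api/query/people?$select=name,age        (return only those two columns)
/api/query/people?$expand=Address         (include a related entity in the same call)
/api/query/people?$top=25&$skip=50        (paging: the third page of 25)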

Basically, this means you end up with an “open” data service API, where you literally just expose data and leave it up to the client to dictate the specific use case. Whether you want that for your own client is kinda negotiable, but when you’re building an application where you really want to support and nurture users building 3rd party integration tools, OData is the perfect candidate for building an “I don’t know beforehand what scenarios you want to accomplish” API.

Furthermore, creating an OData read service is really simple: you take the Microsoft.AspNet.WebApi.OData nuget package, expose an IQueryable in your controller, and slap on the [EnableQuery] attribute:

    public class QueryController : ApiController
    {
        [EnableQuery]
        [HttpGet]
        public IQueryable People()
        {
            // this.dbContext: your data context (an Entity Framework DbContext, for example), not shown here
            return this.dbContext.People;
        }
    }

So here’s the problem: suppose there are 100 people in the database, with ages evenly divided from 1 to 100. The caller requests all people with age > 50 ($filter=age gt 50). We also applied a page size (which you should really always do, to avoid self-inflicted DDOS attacks) of 25 records maximum in a single response. At this point, we do not want to just send back 25 records: we also want to inform the caller that we applied a page size and that there are really 50 people matching the search criteria, and wouldn’t it be nice if we could also tell the caller how to get the next page?

The good news is: according to the OData spec, you can! By returning an “OData verbose” response (“verbose” being the opposite of “light”, the new OData default response format), you can send back a result containing not only the actual results but also additional metadata, like the number of people that matched the search criteria and how to get the next page of results.

The really bad news is: the Web API OData implementation does not support the $inlinecount query parameter (which instructs the server to send back the count after filtering but before paging). OUCH!

Weirdly, after following a dozen blog posts (like this really good one) I stumbled upon the fact that this is only partly true… The Web API OData implementation does in fact support the $inlinecount query parameter; it just doesn’t support sending back the JSON verbose format where the caller actually gets to see the resulting count…
Wait, whot?
A caller can send the $inlinecount, and the EnableQueryAttribute (which really does all the heavy lifting) will correctly handle it, but instead of properly sending the count to the client it will simply keep it in memory and send only the results back. Same story with the link to the next page of records when you set a PageSize.
So the good news is: to re-enable the $inlinecount (in other words, to send back a more verbose response to the caller), you can make your own EnableQueryAttribute:

using Newtonsoft.Json;
using System;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Web.Http;
using System.Web.Http.Filters;
using System.Web.Http.OData;
using System.Web.Http.OData.Extensions;
using System.Web.Http.OData.Query;

namespace Lobsta.webapi
{
    internal class ODataVerbose
    {
        public IQueryable Results { get; set; }

        [JsonProperty(NullValueHandling = NullValueHandling.Ignore)]
        public long? __count { get; set; }

        [JsonProperty(NullValueHandling = NullValueHandling.Ignore)]
        public string __next { get; set; }
    }
    public class QueryableAttribute : EnableQueryAttribute
    {
        public bool ForceInlineCount { get; private set; } 
        public QueryableAttribute(bool forceInlineCount = true, int pageSize = 25)
        {
            this.ForceInlineCount = forceInlineCount;
            //Enables server paging by default
            if (this.PageSize == 0)
            {
                this.PageSize = pageSize;
            }
        }
        public override void OnActionExecuted(HttpActionExecutedContext actionExecutedContext)
        {
            //Enables inlinecount by default if forced to do so by adding it to the query string
            if (this.ForceInlineCount && !actionExecutedContext.Request.GetQueryNameValuePairs().Any(c => c.Key == "$inlinecount"))
            {
                var requestUri = actionExecutedContext.Request.RequestUri.ToString();
                if (string.IsNullOrEmpty(actionExecutedContext.Request.RequestUri.Query))
                    requestUri += "?$inlinecount=allpages";
                else
                    requestUri += "&$inlinecount=allpages";
                actionExecutedContext.Request.RequestUri = new Uri(requestUri); 
            }

            //Let OData implementation handle everything
            base.OnActionExecuted(actionExecutedContext);

            //Examine if we want to return fat result instead of default
            var odataOptions = actionExecutedContext.Request.ODataProperties();  //This is the secret sauce, really.
            object responseObject;
            if (
                ResponseIsValid(actionExecutedContext.Response) 
                && actionExecutedContext.Response.TryGetContentValue(out responseObject)
                && responseObject is IQueryable)
            {
                actionExecutedContext.Response =
                    actionExecutedContext.Request.CreateResponse(
                        HttpStatusCode.OK,
                        new ODataVerbose
                        {
                            Results = (IQueryable)responseObject,
                            __count = odataOptions.TotalCount,
                            __next = (odataOptions.NextLink == null) ? null : odataOptions.NextLink.PathAndQuery
                        }
                    );
            }
        }

        private bool ResponseIsValid(HttpResponseMessage response)
        {
            return (response != null && response.StatusCode == HttpStatusCode.OK && (response.Content is ObjectContent));
        }
    }
}

Note: this is highly opinionated sample code: it always uses a page size of 25, and always returns the inline count… Change it to your liking, for example by checking whether the requested format is json verbose, to be OData spec compliant.
Finally, replace the ‘EnableQuery’ attribute with our custom one:

    public class QueryController : ApiController
    {

        [Queryable]         
        [HttpGet]
        public IQueryable People()
        {
            return this.dbContext.People;
        }
    }

Putting it to the test, I called /api/query/people?$orderby=name&$filter=age gt 50&$inlinecount=allpages again, and now correctly received my requested metadata:

{
 "Results":[
   {"age":51,"name":"Anna"} /* More results were included of course */
 ],
 "__count":50,
 "__next":"/api/query/people?$orderby=name&$filter=age%20gt%2050$inlinecount=allpages&$skip=25"
}

Coding tip #2: fail fast, don’t fail, don’t worry

Facebook recently pulled the plug on one of their data centers…
On purpose.

The idea was to investigate how well they could recover from live failures.

We developers, and beginning developers especially, sometimes have this weird notion that code should be perfect and withstand any storm. Truth is, something can and will always go wrong at some point in time, and we should stop fearing it.

The first, most noticeable form of something going wrong is an exception being thrown. Beginning developers will often shy away from exceptions. They’re cryptic, and they’re more likely to happen in the middle of a demo than while developing…

Fail fast

Hence, out of fear of introducing new exceptions by actually throwing one, beginning developers start writing code like this:

public object getValue(string key){
  if(key == "CurrentUser")
    return SomeContext.User.Name;
  if(key == "CurrentTeam")
    return SomeContext.Team.Name;
  return "Not found";
}

Peaceful, right? No matter what value you ask for, no exception shall ever leave this method.

The unfortunate thing here is that if the calling logic is flawed somewhere, you might only find out much, much later in the process.
The above piece of code is called by some EmailTaskPreparer, which retrieves “current_user” (note: not “CurrentUser”) to create an instance of a task. That task is put on a queue; one hour later a worker process picks it up and processes it by getting the current user’s email address, then sending an email.
One day later, you get a bug report that there are undeliverable emails hanging around the system, and you get to embark on the pleasant adventure of backtracking every possible piece of code that sends emails, puts email tasks on queues, or builds those tasks.

The key lesson is: fail fast. If something is wrong, throw an exception on the spot instead of returning a peaceful default value.
The calling logic will still be just as flawed, but at least now you end up with a bug report stating that an exception was thrown when the EmailTaskPreparer called ‘getValue’ with an invalid key, which will be easy to find and fix, and will give you more time to actually get some real work done.
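To make that concrete, here’s a minimal fail-fast rewrite of the earlier lookup (same hypothetical SomeContext as above):

public object getValue(string key){
  if(key == "CurrentUser")
    return SomeContext.User.Name;
  if(key == "CurrentTeam")
    return SomeContext.Team.Name;
  // Unknown key: that's a bug in the caller, so say so right here,
  // instead of returning a peaceful default that blows up an hour later.
  throw new ArgumentException("Unknown key: " + key, "key");
}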

Don’t fail

Obviously, learning to code is all about learning to take everything in moderation. The next rule of thumb is to understand that exceptions are for ‘exceptional situations’ only.
When you have an abundance of exceptions being thrown all over your code, you’ll soon end up with a lot of try-catch blocks, and eventually with a code base that has two new problems:
– the code becomes less readable (at the bottom of your try block sit a bunch of alternative code paths that make your logic harder to follow)
– the code becomes slower (the compiler can do fewer optimizations because it needs to make sure it can handle your expected exceptional code paths)

To address the first, consider adding logic to your classes that can pre-approve an operation, as the sketch below shows. This is why an iterator has a ‘hasNext()’ function and a command has a ‘canExecute()’ function: you can ask whether you should expect something to go wrong, and decide how to handle that on the spot, instead of hundreds of lines lower in a catch block. It’ll make your code much more readable. Don’t fail if you could have avoided it.
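As a minimal sketch of that pre-approval idea (TaskQueue and its names are made up for illustration, backed by a plain System.Collections.Generic queue):

public class TaskQueue
{
    private readonly Queue<string> tasks = new Queue<string>();

    // Pre-approval: callers can ask before they act, no exception needed.
    public bool HasNext()
    {
        return tasks.Count > 0;
    }

    public string Next()
    {
        // Still failing fast: calling Next() on an empty queue means the
        // caller skipped the HasNext() check, and that's a bug worth reporting.
        if (!HasNext())
            throw new InvalidOperationException("Next() called on an empty queue; check HasNext() first.");
        return tasks.Dequeue();
    }
}

The caller then decides on the spot, with no try-catch in sight:

while (queue.HasNext())
    Process(queue.Next());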

Don’t worry

Finally, there are very few use cases where you actually need to catch an exception. If you keep the two previous rules in mind, exceptions will only occur when something really unexpected happens. In literature, exceptions are considered ‘final’ (the code below where you throw an exception will not execute) because they signify that the system has entered a state from which recovery is not expected to be possible and execution should not continue.
Hence, if an exception occurs that you could not possibly have avoided and there’s no way you can recover from it, why bother catching it?
Don’t worry. Really, you should only catch exceptions in a very limited number of cases:
– you could not avoid it (no ‘hasNext’, ‘canExecute’, etc.) but you still know how to recover from it. For example: reschedule the task for later execution.
– you want to hide exception details: a general catch-all block that catches any exception, logs it, and throws a new exception that hides any internals specific to the current layer of your application. For example, you catch the SQL exception (“Connection failed for user “Bob” with password “Bob123”), log it, and throw a new generic DatabaseOperationFailedException.
Beyond the above use cases and a couple of others, catching and handling exceptions should not be a part of the majority of your code base.
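Here’s a sketch of that second case (the repository, logger and exception type are stand-ins for whatever your application uses, not a specific framework API):

public class DatabaseOperationFailedException : Exception
{
    public DatabaseOperationFailedException(string message, Exception inner)
        : base(message, inner) { }
}

try
{
    repository.SaveOrder(order);
}
catch (SqlException ex)
{
    // Keep the gory details (server names, credentials) in the internal log...
    logger.Error(ex);
    // ...and surface only a layer-appropriate exception, with the original
    // attached as InnerException for diagnostics.
    throw new DatabaseOperationFailedException("Saving the order failed.", ex);
}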

Don’t worry, all systems will fail at one point or another; just try to make sure that when yours fails, you’ll have a precise stack trace and a clean code base to help you trace the cause (or that you know how to plug a data center back in).

Understanding aurelia.io development prerequisites from a .NET perspective (NPM, JSPM, GULP, KARMA, WTH)

There is this awesome javascript framework called aurelia.io that I’ve been working on and using for a little while now. At a certain point, I feel you’ll owe it to yourself (if not, then at least you owe it to me) to download one of the sample apps and play around a bit.

Some time ago, when I was trying to do just that, I felt overwhelmed at first. I had limited experience with javascript, HTML and CSS. Aurelia itself looked (is) really easy to grasp, especially since it has a very C# + XAMLesque feel to it. The learning curve still felt steep though, but only because of the surrounding open source stack: NPM, JSPM, GULP, KARMA and the likes. I was like: WTH are all of these? I wish I’d taken 3 minutes to read an absolute beginner’s blog post like this one to bring my vocabulary up to speed.

Your system

There are some things you’ll need to set up on your system before making the switch to this (or any) open source project…

Git

Git is a free and distributed version control system. In .NET speak: yes, it is exactly the same as TFS, but it’s totally different. Whenever you want to pull or clone a project (‘check out’) from a server (like github), you’ll need git so your OS understands what the hell pull or clone even means.
Additionally, you’ll often need to execute a command (more about that later). The normal cmd.exe, or the hipster mac OS alternative, will at one point or another drop the ball in understanding those commands, whereas installing git gives you a ‘git bash’ which looks and acts just like your command prompt but has a better understanding of how everything in this OSS stack fits together.

NodeJS

NodeJS is a platform built on Chrome’s javascript runtime. You could compare it to installing the .NET runtime on your system. To run your aurelia apps, users do not need to install NodeJS first, because your app runs in the browser and uses that stack. However, when you are developing apps, you do need a runtime for your tooling to execute in, and that runtime is NodeJS.

IDE

You’ll also need an IDE. Your IDE should be an integrated bundle that can do three things: manage your project, execute commands and help you write code.

Unfortunately, the Visual Studio ultimate super duper bundle fails the third requirement, as it does not (yet) support ES6 syntax. I know! What a shocker, right!

My first alternative was going back to a simple text editor. According to stackoverflow’s 2015 survey, SublimeText is the second favorite. Once it was set up with an ES6 plugin for next-generation javascript syntax highlighting and intellisense, and an integrated terminal/command prompt, I realized going back to a text editor was the exact opposite of taking a step back. It’s really, really, really fast. Like: really fast!

A little later, we received a free OSS license for JetBrains’ WebStorm IDE. One of the biggest, noteworthy advantages for me was the integrated (visual) support for git & github. After using it for about 2 months now, it has my love and my official seal of approval, and I doubt I’ll go back to VS for javascript development…

Package managers

Managers… Plural.

NPM

NPM is a javascript package manager. In aurelia, we use NPM because of its ability to install packages not only in your project, but also in your tooling. Think of it as Visual Studio’s “plugins”, only you install the plugins in your build process/tooling instead of your IDE.

JSPM

JSPM is also a javascript package manager. In aurelia, we use JSPM because of its ability to manage dependencies not only in your project but also in your app while it’s running. Suppose your project uses packages A and B. Both A and B depend on package C, but on a different major version: A uses C1 and B uses C2. Whoa, now what? Actually, JSPM will download both version 1 and 2 of C; then, when you run your project, it will load C1 and C2 in different ‘scopes’. If some component in A needs something from C, it’ll get the C1 version, whereas a component in B will get the C2 version. WHOA! Think of it as a (really) advanced NuGet.

NPM Libraries

Remember: NPM is used to install tooling, so these libraries are your development tools.

Gulp

Using NPM, you can install Gulp. Gulp runs automated development tasks. gulp watch, for example, is the equivalent of Visual Studio’s F5. It runs several automation tasks like cleaning the code, building it, starting a NodeJS server (like VS starts an IISExpress) and serving up your website (the actual watch part). While you are developing, you can keep the gulp watch command running, and any change you make to your code will trigger the complete clean, build & run task chain (in about 0.3 secs) so that your website is automatically refreshed and you can see your changes.

Wait: build? Is javascript built? Javascript itself is not built; it’s a scripting language that’s executed by the browser’s javascript runtime. However, you’ll often use a superset of javascript like TypeScript, or an alternative language like Dart (the aurelia team just codes in ES6/ES7, the next and next-next javascript specs). Because your browser does not understand those, your TS/Dart/ES6 is compiled down (called ‘transpiling’) to plain vanilla javascript that works in any browser.

We use gulp for a number of other tasks as well, including building documentation, releasing, etc.

Karma

Karma is your test runner. It performs a number of tasks to build the code (like gulp), scan a directory for all test cases and run them lightning fast. Just like gulp watch, you can trigger your test runner with karma start and keep the command running. Any code change will trigger a rerun of the affected tests (again: wow) so you can instantly check what you’re breaking/fixing. (More breaking than fixing in my case, but whatever.)

Time to play

Armed with this knowledge, I hope you find it easier to go play with an aurelia sample. Set up git and NodeJS from the ‘Your system’ chapter above, then launch a git bash and navigate to a working folder:

cd .. #go up one directory, or
cd someFolderName # go into a directory

Now, execute

git clone https://github.com/aurelia/app-contacts.git

This will clone (download) the contents of a sample app into a folder called app-contacts. When done, navigate to that folder:

cd app-contacts

Armed with the knowledge in this article, you should now be able to follow this little guide, and have the app running locally without screaming WTH once. ;-)

John Holmes and the case of the mysteriously missing LightSwitch record

Chapter one: the night of disappearance

The cold wind was howling in the streets when Holmes knocked on Elvis’ door.  “It just doesn’t make sense”, was the first thing Elvis said, “it’s simply… gone”…

Aight, pause please. I’m enjoying high 80s (Fahrenheit) at the moment here in the Caribbean, there was no door (just skype), and Elvis is not the actual name of the developer I was helping. I’ll try fiction again in a couple of years; let’s just get through the story of this missing LightSwitch record I encountered last week, because I’m pretty sure one day someone is going to stumble upon a similar issue, and this blog post will save him/her half an hour of being dumbfounded…

Chapter one (for real this time): a record went missing

So here’s what really happened: Elvis called me up because the LightSwitch desktop application he was working on was throwing an unexplainable null reference exception. Unexplainable, because his flow was really basic, textbook easy even:

a) create a record from a “create new entity” screen (the entity was called a “case”)

b) when the screen has successfully saved the new record, use code to close the “create new entity” screen and open the default detail screen for that case entity

c) a null reference exception happens in the _initializeDataWorkspace method on this.Case

I’d never seen this before. Right after successfully saving, we show a detail screen for the same entity, and the entity just disappears. I’m pretty sure at that point I spent at least 10 minutes mumbling “I’m dumbfounded”. We started debugging, and piece by piece we were able to reassemble the complete puzzle…

See, each screen in a LightSwitch desktop application has its own DataWorkspace. This is LightSwitch’s implementation of the unit-of-work pattern: whatever changes happen in one screen get committed as a whole upon saving, and either all changes are persisted to the data source or, in the case of an exception, none.

Since each screen has its own DataWorkspace, a single entity can exist multiple times (multiple instances) in different DataWorkspaces. This also means that, unlike in the LS HTML client, when you write code like

this.Application.ShowDefaultScreen(this.Case);

the framework will actually take this.Case, an entity which belongs to the DataWorkspace of the “create new entity” screen, and pass only the ID field(s) to the new “detail” screen. The new screen will then use the ID(s) to retrieve the entity again, this time in its own DataWorkspace.

Chapter 2: the resource segment cannot be found

Sure enough, Fiddler soon revealed that when the second screen tried to retrieve the entity again, something went wrong.

Really wrong.

The exception returned was something along the lines of “the resource segment cannot be found for ‘http://localhost:xxxx/ApplicationData.svc/Case’”.

Another ten minutes of mumbling about how dumbfounded I was went by, before we had an epiphany…

LightSwitch uses the OData protocol to interact with the server tier. OData uses http verbs (GET, POST, etc) for the ‘action’ (***), and a URI to state which entity you want to perform that action on. URI is the keyword here: uniform resource identifier. Or, translated into normal language: your call to “blah blah blah.ApplicationData.svc/Case/123” means you want me to retrieve the ‘Case’ entities and return the slice identified by “123”.

Get all resources for “Case” and only return slice (fragment) 123.
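In other words, the detail screen’s fetch boils down to a plain http call along these lines (the exact entity set name depends on your model; this just illustrates the shape):

GET http://localhost:xxxx/ApplicationData.svc/Cases(123)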

So basically this was the “OData way” of saying: case 123 can’t be found.

Thanks for being straightforward and all…

But… we had just created it. And according to our select * from Case, the record was in the database. So why could LightSwitch not find it?

Chapter 3: if it looks like a duck

The mystery continued with 10 more minutes of mumbling about dumbfoundedness… But then we remembered that there was actually a filter active.

LightSwitch supports row-level filtering, ideal for slicing your records in multi-tenant or advanced security/permission situations.

And sure enough, this was the case:

if (!Application.Current.User.HasPermission(Permissions.FamilyLaw))
    query = query.Where(c => c.AssignedCategory.Name != "Family");

The user we were testing with did not have the FamilyLaw permission any more, effectively causing the cases to be filtered down to only those where the assigned category is not “Family”.

Right?

Right?

Turns out, our problem was even a little more subtle…

This LINQ query filters out all the cases that have an AssignedCategory named “Family”.

Read that again… The LINQ query is actually, unintentionally I assure you, adding two filter criteria behind the scenes:

  c.AssignedCategory.Name != "Family"
  seems to be interpreted as...
  c.AssignedCategory != null && c.AssignedCategory.Name != "Family"

Really, LINQ? Really?

Once we rewrote the LINQ query (to also allow cases with AssignedCategory == null), our freshly created case entity (which had not yet been assigned to any category) found its way down to the detail screen.
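The rewrite looked something along these lines:

if (!Application.Current.User.HasPermission(Permissions.FamilyLaw))
    query = query.Where(c => c.AssignedCategory == null
                          || c.AssignedCategory.Name != "Family");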


*** PS: LightSwitch doesn’t really use HTTP verbs like PUT, PATCH, … directly on the OData service. Because each screen has its own “unit of work”, all changes get posted just once, together, to a URI called $batch. This HTTP “batch” call contains all the different changes that happened in a screen (delete this, modify that, etc) in a single HTTP call. Using transactional techniques, all changes then get persisted to the data source together or, in case something goes wrong, none of them do.