Dev Up Conference 2017 – Session Resources

This week I attended the Dev Up Conference in St. Louis.  I thought that the (new) venue, food, and speakers were all excellent, and improved upon past editions of the conference.  Kudos to all involved; they must have put in a lot of hard work.

As usual after attending a conference such as this, I attempt to accumulate links to as many of the session resources as I can find (on Twitter, YouTube, blogs, and so on) and share them here on my blog.  I do this because I figure that I am not the only one who had to skip great sessions because they were scheduled at the same time as other equally great sessions.

So without further ado, here is the list of all sessions from this year’s Dev Up Conference, with as many links to additional information as I could find.  Apologies for any that I missed.

Also, a number of the sessions were recorded.  I assume that they will be posted online by the Dev Up organizers, so keep an eye on the conference web site for those.

.CSS {Display: What?}
Martine Dowden

.NET and Couchbase: Using NoSQL Is Easier Than You Think
Don Schenck

.NET, Linux and Microservices Architecture
Don Schenck

1 Billion Records IS NOT BIG DATA PEOPLE!
Steve Howard

5 Popular Choices for NoSQL on a Microsoft Platform
Matthew Groves

A Brisk Stroll Through AzureML Studio
Kevin Queen

A feature based approach to software development
Ryan Lanciaux

A Guide to JavaScript’s Scary Side
Jonathan Mills

A Lap Around Xamarin.Forms
Douglas Starnes

A Skeptics Guide to Functional Style JavaScript
Jonathan Mills

Accessibility Cookbook: 10 Easy Recipes
Martine Dowden

Adding Realtime Features to Your Applications with SignalR
Javier Lozano

Agile Delivery in a Waterfall World
John Gobble

Agile Failures: Stories From The Trenches
Philip Japikse

Agile Metrics That Matter
Clint Edmonson

Agile: You Keep Using That Word…
Philip Japikse

All The New Things: A Madcap Tour of the latest in Microsoft Web Development
Brad Tutterow

An Entrepreneur’s Tale
Randy Walker

An Extended Explanation of Caching
Tom Cudd

An Introduction to Microservices
Mike Green

Angular vs. React: A live demonstration, comparison, and discussion
Kevin Grossnicklaus

Angular, the ASP.NET Pitch
Ed Charbeneau

ASP.NET Core + React Equals Awesome
Lee Brandt

Authentication and Security Strategies for the Modern Web
Spencer Schneidenbach

Azure SQL Data Warehouse, Cloud BI
Randy Walker

Becoming an Architect
Ken Sipe

Beginner Reactive Programming with RxJS
Cory Rylan

Between Two Form Tags
Danielle Cooley

Build a JavaScript Dev Environment in 1 Hour
Cory House

Building a Chat Bot with API.ai
Erin Page

Building A Highly Scalable Service that Survived A Super Bowl
Keith Elder

Building Powerful Applications with Angular and TypeScript
David Giard

Building Reusable UI Components in ASP.NET Core MVC
Scott Addie

Building Your Evil(?) Empire with Azure Functions
Bryan Soltis

Bus Accident Management
James West

Career Management – Better than Career Development
John Maglione

Career Paths Beyond Sr. Developer
Jim Drewes

Cloud Networking: What’s Underneath?
James Nugent

Code Is Communication
Steven Hicks

Compromise Less and Deliver More with Xamarin
David Ortinau

Confronting Your Fears: Entity Framework Performance Deep-dive
Mitchel Sellers

Continuous Delivery at Enterprise Scale
Jason Whittington

Custom Middleware & Microservices with ASP.NET Core
Ondrej Balas

Data Science Platform Architecture
Ryan Metcalf

Database DevOps in Visual Studio 2017 Enterprise with ReadyRoll Core
Ronnie Hicks

Dockerize Your Development
Lee Brandt

Domain Driven Design: The Good Parts
Jimmy Bogard

Effective Data Visualization
David Giard

Electron: Desktop Development for Web Developers
Chris Woodruff

Everything I Didn’t Know About JavaScript
Brad Tutterow

Fear and (Self) Loathing in IT – A Healthy Discussion on Imposter Syndrome
Angela Dugan

Feed Your Inner Data Scientist: JavaScript Tools for Data Visualization and Filtering
Doug Mair

Forget Velocity, Let’s Talk Acceleration
Jessica Kerr

From C# 6 to C# 7, Then and Now!
David Pine

From Developer to Data Scientist
Gaines Kergosien

Getting Started with Machine Learning, for Non-Data Scientists
Yung Chou

Git Demystified
Kent Peek

Giving Clarity to LINQ Queries by Extending Expressions
Ed Charbeneau

Growing a Dev Team from Bootstrap to Enterprise
Scott Connerly

Have Your Best Season Yet: Becoming a (Microsoft) MVP
Lisa Anderson

HoloLens Mixed Reality for Fun & Profit
Gaines Kergosien

How do You Measure up? Collect the Right Metrics for the Right Reasons
Angela Dugan

How Mobile Web Works at Twitch
Matt Follett

I, for One, Welcome Our Robot Overlords: Intro to the Bot Framework
John Alexander

Implementing a Modern Web Stack in a Legacy Environment
James West

Implementing Web Security in Your ASP.NET Applications
Javier Lozano

Intellectual Property Fundamentals for the Technologist
Jeff Strauss

Intro to Hacking with the Raspberry Pi
Sarah Withee

Intro to Xamarin
Ryan Overton

Introduction to Amazon AWS
Brian Korzynski

Introduction to Angular
Muljadi Budiman

Introduction to Asynchronous Code in .NET
Bill Dinger

Introduction to Online Security
Michael Dowden

Introduction to the D3.js visualization library
Bryan Nehl

Introduction To the Microsoft Bot Framework
Becky Bertram

Javascript Asynchronous Roundup (Promises Promises…)
Mark Meadows

JavaScript Futures: ES2017 and the Road Ahead
Jeff Strauss

Jewelbots: How to Get More Girls Coding!
Jennifer Wadella

Kotlin: What’s in it For You
Douglas Starnes

Learning the Language of HTTP for a Better Data Experience in Your Mobile Apps
Chris Woodruff

Let’s Talk About Mental Health
Arthur Doler

Leveraging Microsoft Azure to enable your Internet of Things
Ralph Wheaton

Linux and Windows Containers, Not All Are Created Equal
Yung Chou

Love and Hate, Having Conversations About Going to the Cloud
Bryan Roberts

Make .NET Great Again!
Sam Basu

Managing Millennials
Jim Drewes

Maximize Professional Growth By Doing Scary Things
Steven Hicks

Mechanics and Moxie: Modernizing Quality Assurance
Kylie Schleicher

Microservice-Powered Applications – It worked for Voltron, it can work for you!
Bryan Soltis

Microservices – A Pattern for Success
David Davids

Microsoft Azure Makes Machine Learning Accessible and Affordable
Douglas Starnes

Migrating from desktop to serverless with AWS
Bryan Nehl

Mobile Development For Web Developers
Justin James

Moving into mobile with Angular 2 and Ionic Framework
Mike Hamilton

Moving into mobile with React Native
Mike Hamilton

Naked and Not Afraid: How to Better Serve Your Clients
Rick Petersen

Neural Networks: The Good Bits
Chase Aucoin

Next-level test-driven development
Alison Hawke

Optimizing Application Performance
Jason Turan

Planet scale data with CosmosDB
Bryan Roberts

Planning for Failure
Jesse Phelps

Practical Security Practices: Threat Modeling
Josh Gillespie

Productivity: How to Get Things Done in this Digital Age
Keith Elder

React for the Uninitiated
Mark Meadows

Refactoring Towards Resilience
Jimmy Bogard

ReSharper: Discover the Secrets
Ondrej Balas

Respond To and Troubleshoot Production Incidents Like an SA
Tom Cudd

Reverse Engineering a Bluetooth Lightbulb
Jesse Phelps

Securing ASP.NET Core APIs and Websites with IdentityServer4
Jeffrey St. Germain

Securing your Applications with Azure AD
Mike Green

Self-Assembling, Self-Healing Systems in the AWS cloud
James Nugent

Serilog: Logging All Grown Up
Brian Korzynski

Serverless JavaScript OMG
Burke Holland

Should I be a generalist or a specialist?
Eric Potter

Should I make the Transition to ASP.NET MVC Core? Will it Hurt?
Mitchel Sellers

Software Development to Leadership
Cori Kristoff

SQL Server For The .NET Developer
Clayton Hoyt

SQL Server Power Hour With Dan and Kathi
Dan Guzman

Strategies for learning React
Ryan Lanciaux

Survival Guide to the Robot Apocalypse – Intro to Deep Learning for Developers
Steve Howard

Swift start on iOS development
Muljadi Budiman

Take Each Day and Work on Making it Better
Dean Furness

Taking Azure Application Insights to the Next Level
Ralph Wheaton

Taming the Tentacles of Octopus
Kevin Fitzpatrick

Teaching Kids Programming
Sarah Phelps

The Business Case for UX
Danielle Cooley

The Hardest Part of Being an Architect: A Death Star Story
Rick Petersen

The Lean & Agile Transformation Playbook
Clint Edmonson

The Modern ASP.NET Tech Stack!
Sam Basu

The Power of Secrets
Sarah Withee

The Reusable JavaScript Revolution
Cory House

The Saboteur in Your Retrospectives: How Your Brain Works Against You
Arthur Doler

The Thrill of the Hunt: The Return to Exploratory Testing
Kylie Schleicher

The Two Question Code Quiz: How to Interview Programmers Effectively
Scott Connerly

To Infinity and Beyond: Build Serverless APIs
Bryan Roberts

TypeScript — JavaScript Reimagined
David Pine

Understanding Azure Resource Templates
Paul Hacker

Unit Testing Strategies & Patterns in C#
Bill Dinger

Visual Studio Code Can Do THAT?!?
Burke Holland

What C# Programmers Need to Know About Pattern Matching
Eric Potter

What Is Data Science?
Ryan Metcalf

What Makes a Good Developer? – Increasing Your Value in a Polyglot World
Eric Lynn

What’s New in ASP.NET Core 2.0?
Scott Addie

What’s New in Java 9
Billy Korando

What’s New in VS 2017 and C# 7
Doug Mair

Why Aren’t There More Women Developers?
Jennifer Wadella

Windows IoT Core Development on a Raspberry Pi
Kevin Grossnicklaus

You Got Your Dev in My Ops, You Got Your Ops in My Dev
Paul Hacker

Your JavaScript Needs Types
Spencer Schneidenbach

Amazon Fire HD 8 Crashing My Wireless Router?!

I took advantage of Amazon Prime Day this year and picked up a new Amazon Fire HD 8 tablet. While setting it up, I ran into problems when the wireless connection kept dropping. Others in the family had reported Wifi problems earlier in the day, so I didn’t think too much of it. I rebooted the cable modem and wireless router, but the problems persisted. I figured it was a problem at the cable company and decided to just let it go until the next day.

Unfortunately, the problems persisted into the next day… until I turned off Wifi on the Fire HD 8! It turns out that something about the Fire HD 8 was crashing my wireless router. This could be consistently demonstrated: the router crashed repeatedly every time I turned on Wifi on the Fire HD 8, and recovered when I turned off Wifi.

After doing some online research and finding a number of unusual solutions that were purported to work (make sure to check the “Hide Password” box on the Fire HD 8 before connecting? really?), I was able to track down the real problem (and a solution).

Wireless routers can be configured with a variety of different security options. These options control who has access to the network; it can be left entirely open for anyone to use, or can be secured using a variety of different protocols and encryption strategies.

Two of the options are to leave the network open or to use WEP security. Neither is a good choice: an open network is a bad idea, and WEP is a weak protocol.

Better options are WPA and WPA2, with WPA2 being the more secure of the two. My router’s configuration page includes the following description of the security options:

“Use ‘WPA or WPA2’ mode to achieve a balance of strong security and best compatibility. This mode uses WPA for legacy clients while maintaining higher security with stations that are WPA2 capable. Also the strongest cipher that the client supports will be used. For best security, use ‘WPA2 Only’ mode. This mode uses AES cipher. For maximum compatibility, use ‘WPA Only’. This mode uses TKIP cipher.”

Another thing to note is that the WPA2 with AES option allows for the highest wireless rate (generally around 130Mbps). WPA with TKIP is capped at a rate of 56Mbps.

How does all of this relate to the problem I was having with my Amazon Fire HD 8 tablet? My router had been configured as suggested, in ‘WPA or WPA2’ mode, for maximum compatibility. When I changed the configuration to ‘WPA Only’ (TKIP), the problems went away! It seems that something about my router, the Amazon Fire HD 8, and WPA2 with AES was a bad combination.

The solution was to configure the “Guest Zone” on my router, which effectively sets up a second network. I left my original network configured in ‘WPA or WPA2’ mode, so that all of my existing devices could take advantage of the better security and higher wireless throughput of WPA2 (assuming they supported it). The new “Guest” network was configured with ‘WPA Only’ and the TKIP cipher. I connected the Fire HD 8 to the “Guest” network, and now all of my devices (including the router!) are able to coexist peacefully. As for the lower throughput rate on the Guest network imposed by the ‘WPA Only’ option, it has not been an issue: I have been able to stream video on the Fire HD 8 from several locations in the house, both near and relatively far from the wireless router.

Hope this information helps someone else!

 

Finding Your Windows System’s Last Boot Time

To find the last boot time of your computer, use the following command:

systeminfo | find "System Boot Time"

It will produce output that looks something like this:

System Boot Time:    11/11/2016, 10:11:21 AM

The systeminfo command sends detailed information about the computer and the operating system to the standard output.  The "|" (pipe) sends that output to the find command, which searches it for the string "System Boot Time" and outputs any matching lines.

This should work on Windows 7 and later, and on Windows Server 2008 and later.
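
If you need the same information from code rather than from the command line, one approach is to subtract the system uptime from the current time.  Here is a minimal C# sketch of that idea (Environment.TickCount64 requires .NET Core 3.0 or later, and the result is an approximation, not the exact value reported by systeminfo):

    using System;

    class LastBootTime
    {
        static void Main()
        {
            // Environment.TickCount64 is the number of milliseconds since the system started,
            // so subtracting it from the current time approximates the last boot time.
            DateTime bootTime = DateTime.Now - TimeSpan.FromMilliseconds(Environment.TickCount64);
            Console.WriteLine($"System Boot Time: {bootTime}");
        }
    }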

Data Access Framework Comparison

Introduction

For some time now I have been working on a project that utilizes a custom-built data access framework, rather than popular ORM frameworks such as Entity Framework or NHibernate.

While the custom framework has worked well for the project, I had questions about it.  For example, it uses stored procedures to implement basic CRUD operations, and I wondered if inline parameterized SQL statements might perform better.  Also, I wondered about the performance of the custom framework compared to the leading ORMs.

Besides my questions about the custom framework, I recognized the importance of having at least a basic understanding of how to use the other ORM frameworks.

In order to answer my questions about the custom framework and to gain some practical experience with the other ORMs, I created a simple web application that uses each of those frameworks to perform basic CRUD operations.  While executing the CRUD operations, the application times them and produces a summary report of the results.

The code for the test application can be found at https://github.com/mlichtenberg/ORMComparison.

NOTE: I assume that most readers are familiar with the basics of Entity Framework and NHibernate, so I will not provide an overview of them here.

Using the custom framework is similar to Entity Framework and NHibernate’s “database-first” approach.  Any project that uses the library references a single assembly containing the base functionality of the library.  A T4 template is used to generate additional classes based on tables in a SQL Server database.  Some of the classes are similar to EF’s Model classes and NHibernate’s Domain classes.  The others provide the basic CRUD functionality for the domain/model classes. 

For these tests I made a second copy of the custom framework classes that provide the basic CRUD functionality, and edited them to replace the CRUD stored procedures with parameterized SQL statements.

The custom framework includes much less overhead on top of ADO.NET than the popular ORMs, so I expected the tests to show that it was the best-performing framework.  The question was, how much better?

In the rest of this post, I will describe the results of my experiment, as well as some of the optimization tips I learned along the way.  Use the following links to jump directly to a topic.

Test Application Overview
“Out-of-the-Box” Performance
Entity Framework Performance After Code Optimization
     AutoDetectChangesEnabled and DetectChanges()
     Recycling the DbContext
NHibernate Performance After Configuration Optimization
     What’s Up with Update Performance in NHibernate?
Results Summary

Test Application Overview

A SQL Express database was used for the tests.  The data model is borrowed from Microsoft’s Contoso University sample application.  Here is the ER diagram for the database:

[ER diagram of the Contoso University database]

 

The database was pre-populated with sample data.  The number of rows added to each table was:

Department: 20
Course: 200
Person: 100000
Enrollment: 200000

This was done because SQL Server’s optimizer will behave differently with an empty database than it will with a database containing data, and I wanted the database to respond as it would in a “real-world” situation.  For the tests, all CRUD operations were performed against the Enrollment table.
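
To give a concrete idea of the pre-population step, here is a minimal ADO.NET sketch for seeding the Person table.  It is illustrative only; the connection string and column names are assumptions, not the actual script used to load the test data.

    using System.Data.SqlClient;

    class SeedSketch
    {
        static void Main()
        {
            // Assumed connection string; adjust for your own SQL Express instance.
            const string connectionString =
                @"Server=.\SQLEXPRESS;Database=School;Integrated Security=true";

            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();

                // 100000 Person rows, per the list above (column names are assumed).
                for (int i = 0; i < 100000; i++)
                {
                    using (var command = new SqlCommand(
                        "INSERT INTO Person (LastName, FirstName) VALUES (@last, @first)",
                        connection))
                    {
                        command.Parameters.AddWithValue("@last", "Last" + i);
                        command.Parameters.AddWithValue("@first", "First" + i);
                        command.ExecuteNonQuery();
                    }
                }
            }
        }
    }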

Five different data access frameworks were tested:

  1. Custom framework with stored procedures
  2. Custom framework with parameterized SQL statements
  3. Entity Framework
  4. NHibernate
  5. Fluent NHibernate

The testing algorithm follows the same pattern for each of the frameworks:

01) Start timer
02) For a user-specified number of iterations 
03)      Submit an INSERT statement to the database
04)      Save the identifier of the new database record
05) End timer
06) Start timer
07) For each new database record identifier
08)      Submit a SELECT statement to the database
09) End timer
10) Start timer
11) For each new database record identifier
12)      Submit an UPDATE statement to the database
13) End timer
14) Start timer
15) For each new database record identifier
16)      Submit a DELETE statement to the database
17) End timer

Note that after the test algorithm completes, the database is in the same state as when the tests began.

To see the actual code, visit https://github.com/mlichtenberg/ORMComparison/blob/master/MVCTestHarness/Controllers/TestController.cs.
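
For a rough sense of how the timings were captured, here is a minimal C# sketch of a single timed pass using Stopwatch.  The RunInserts method is a hypothetical stand-in for the framework-specific CRUD code, not part of the actual test harness.

    using System;
    using System.Collections.Generic;
    using System.Diagnostics;

    class TimingSketch
    {
        // Hypothetical stand-in for the framework-specific INSERT loop.
        static List<int> RunInserts(int iterations)
        {
            var ids = new List<int>();
            for (int x = 0; x < iterations; x++)
            {
                // ... submit an INSERT via the framework under test and capture the new ID ...
                ids.Add(x);
            }
            return ids;
        }

        static void Main()
        {
            int iterations = 10000;

            var timer = Stopwatch.StartNew();
            List<int> ids = RunInserts(iterations);
            timer.Stop();

            Console.WriteLine($"Insert: {timer.Elapsed.TotalSeconds} seconds");

            // The SELECT, UPDATE, and DELETE passes are timed the same way,
            // iterating over the captured record identifiers.
        }
    }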

"Out-of-the-Box" Performance

I first created very basic tests for each framework. Essentially, these were the “Hello World” versions of the CRUD code for each framework.  No optimization was attempted.

Here is an example of the code that performs the INSERTs for the custom framework.  There is no difference between the version with stored procedures and the version without, other than the namespace from which EnrollmentDAL is instantiated.

    DA.EnrollmentDAL enrollmentDAL = new DA.EnrollmentDAL();

    for (int x = 0; x < Convert.ToInt32(iterations); x++)
    {
        DataObjects.Enrollment enrollment = enrollmentDAL.EnrollmentInsertAuto
            (null, null, 101, 1, null);
        ids.Add(enrollment.EnrollmentID);
    }

And here is the equivalent code for Entity Framework:

    using (SchoolContext db = new SchoolContext())
    {
       for (int x = 0; x < Convert.ToInt32(iterations); x++)
        {
            Models.Enrollment enrollment = new Models.Enrollment {
                CourseID = 101, StudentID = 1, Grade = null };
            db.Enrollments.Add(enrollment);
            db.SaveChanges();
            ids.Add(enrollment.EnrollmentID);
        }

    }

The code for NHibernate and Fluent NHibernate is almost identical.  Here is the NHibernate version:

using (var session = NH.NhibernateSession.OpenSession("SchoolContext"))
{
    var course = session.Get<NHDomain.Course>(101);
    var student = session.Get<NHDomain.Person>(1);

    for (int x = 0; x < Convert.ToInt32(iterations); x++)
    {
        var enrollment = new NHDomain.Enrollment { 
            Course = course, Person = student, Grade = null };
        session.SaveOrUpdate(enrollment);

        ids.Add(enrollment.Enrollmentid);
    }

}

The SELECT, UPDATE, and DELETE code for each framework followed similar patterns. 
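
For reference, here is a simplified sketch of what those loops look like for Entity Framework.  It is not a verbatim copy of the test code (the actual tests time each operation in its own loop, and the Grade value assigned below is just an assumed example); see the linked TestController.cs for the real implementation.

    using (SchoolContext db = new SchoolContext())
    {
        foreach (int id in ids)
        {
            // SELECT: load the entity by primary key
            Models.Enrollment enrollment = db.Enrollments.Find(id);

            // UPDATE: change a property and save
            enrollment.Grade = Models.Grade.A;   // assumed sample value
            db.SaveChanges();

            // DELETE: remove the entity and save
            db.Enrollments.Remove(enrollment);
            db.SaveChanges();
        }
    }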

    NOTE: A SQL Server Profiler trace proved that the actual interactions with the database were the same for each framework.  The same database connections were established, and equivalent CRUD statements were submitted by each framework.  Therefore, any measured differences in performance are due to the overhead of the frameworks themselves.

        Here are the results of the tests of the “out-of-the-box” code:

      Framework              Operation     Elapsed Time (seconds)
      Custom                 Insert        5.9526039
      Custom                 Select        1.9980745
      Custom                 Update        5.0850357
      Custom                 Delete        3.7785886

      Custom (no SPs)        Insert        5.2251725
      Custom (no SPs)        Select        2.0028176
      Custom (no SPs)        Update        4.5381994
      Custom (no SPs)        Delete        3.7064278

      Entity Framework       Insert        1029.5544975
      Entity Framework       Select        8.6153572
      Entity Framework       Update        2362.7183765
      Entity Framework       Delete        25.6118191

      NHibernate             Insert        9.9498188
      NHibernate             Select        7.3306331
      NHibernate             Update        274.7429862
      NHibernate             Delete        12.4241886

      Fluent NHibernate      Insert        11.796126
      Fluent NHibernate      Select        7.3961941
      Fluent NHibernate      Update        283.1575124
      Fluent NHibernate      Delete        10.791648

      NOTE: For all tests, each combination of Framework and Operation was executed 10000 times.  Looking at the first line of the preceding results, this means that the custom framework took roughly 5.95 seconds to perform 10000 INSERTs.

      As you can see, both versions of the custom framework outperformed Entity Framework and NHibernate.  In addition, the version of the custom framework that used parameterized SQL was very slightly faster than the version that used stored procedures.  Most interesting, however, was the performance for INSERT and UPDATE operations.  Entity Framework and both versions of NHibernate were not just worse than the two custom framework versions, they were much MUCH worse.  Clearly, some optimization and/or configuration changes were needed.

      Entity Framework Performance After Code Optimization

      AutoDetectChangesEnabled and DetectChanges()  

      It turns out that much of Entity Framework’s poor performance appears to have been due to the nature of the tests themselves.  Information on Microsoft’s MSDN website notes that if you are tracking a lot of objects in your DbContext object and call methods like Add() and SaveChanges() many times in a loop, your performance may suffer.  That scenario describes the test almost perfectly.

      The solution is to turn off Entity Framework’s automatic detection of changes by setting AutoDetectChangesEnabled to false and calling DetectChanges() explicitly.  Entity Framework then only detects changes to entities when explicitly told to do so.  Here is the updated code for performing INSERTs with Entity Framework (the changes are the AutoDetectChangesEnabled assignment and the explicit DetectChanges() call):

      using (SchoolContext db = new SchoolContext())
      {
          db.Configuration.AutoDetectChangesEnabled = false;

          for (int x = 0; x < Convert.ToInt32(iterations); x++)
          {
              Models.Enrollment enrollment = new Models.Enrollment {
                  CourseID = 101, StudentID = 1, Grade = null };
              db.Enrollments.Add(enrollment);
              db.ChangeTracker.DetectChanges();
              db.SaveChanges();
              ids.Add(enrollment.EnrollmentID);
          }
      }

      Here are the results of tests with AutoDetectChangesEnabled set to false:

      Framework           Operation    Elapsed Time (seconds)
      Entity Framework    Insert       606.5569332
      Entity Framework    Select       6.4425741
      Entity Framework    Update       605.6206616
      Entity Framework    Delete       21.0813293

      As you can see, INSERT and UPDATE performance improved significantly, and SELECT and DELETE performance also improved slightly.

      Note that turning off AutoDetectChangesEnabled and calling DetectChanges() explicitly will slightly improve Entity Framework’s performance in most cases.  However, a missed DetectChanges() call can cause subtle bugs.  Therefore, it is best to use this optimization only in very specific scenarios, such as this one, and allow the default behavior otherwise.

      Recycling the DbContext

      While Entity Framework performance certainly improved by changing the AutoDetectChangesEnabled value, it was still relatively poor. 

      Another problem with the tests is that the same DbContext was used for every iteration of an operation (i.e. one DbContext object was used for all 10000 INSERT operations).  This is a problem because the context maintains a record of all entities added to it during its lifetime.  The effect of this was a gradual slowdown of the INSERT (and UPDATE) operations as more and more entities were added to the context.

      Here is what the Entity Framework INSERT code looks like after modifying it to periodically create a new DbContext (the key change is that the using block now sits inside an outer loop, so a new context is created after every batch of roughly 100 INSERTs):

      for (int x = 0; x < Convert.ToInt32(iterations); x++)
      {
          // Use a new context after every 100 Insert operations
          using (SchoolContext db = new SchoolContext())
          {
              db.Configuration.AutoDetectChangesEnabled = false;

              int count = 1;
              for (int y = x; y < Convert.ToInt32(iterations); y++)
              {
                  Models.Enrollment enrollment = new Models.Enrollment {
                      CourseID = 101, StudentID = 1, Grade = null };
                  db.Enrollments.Add(enrollment);
                  db.ChangeTracker.DetectChanges();
                  db.SaveChanges();
                  ids.Add(enrollment.EnrollmentID);

                  count++;
                  if (count >= 100) break;
                  x++;
              }
          }
      }

      And here are the results of the Entity Framework tests with the additional optimization added:

      Framework            Operation     Elapsed Time (seconds)
      Entity Framework     Insert        14.7847024
      Entity Framework     Select        5.5516514
      Entity Framework     Update        13.823694
      Entity Framework     Delete        10.0770142

      Much better!  The time to perform the SELECT operations was little changed, but the DELETE time was reduced by half, and the INSERT and UPDATE times decreased from a little more than 10 minutes to about 14 seconds.

      NHibernate Performance After Configuration Optimization

      For the NHibernate frameworks, the tests themselves were not the problem.  Rather, NHibernate’s default configuration needed some tuning.

      An optimized solution was achieved by changing the NHibernate configuration settings.  Here is the definition of the SessionFactory for NHibernate (the additions are the SetProperty calls):

      private static ISessionFactory SessionFactory
      {
          get
          {
              if (_sessionFactory == null)
              {
                  string connectionString = ConfigurationManager.ConnectionStrings
                      [_connectionKeyName].ToString();

                  var configuration = new NHConfig.Configuration();
                  configuration.Configure();

                  configuration.SetProperty(NHConfig.Environment.ConnectionString,
                      connectionString);

                  configuration.SetProperty(NHibernate.Cfg.Environment.FormatSql,
                      Boolean.FalseString);
                  configuration.SetProperty
                     (NHibernate.Cfg.Environment.GenerateStatistics,
                          Boolean.FalseString);
                  configuration.SetProperty
                     (NHibernate.Cfg.Environment.Hbm2ddlKeyWords,
                          NHConfig.Hbm2DDLKeyWords.None.ToString());
                  configuration.SetProperty(NHibernate.Cfg.Environment.PrepareSql,
                          Boolean.TrueString);
                  configuration.SetProperty
                      (NHibernate.Cfg.Environment.PropertyBytecodeProvider,
                          "lcg");
                  configuration.SetProperty
                      (NHibernate.Cfg.Environment.PropertyUseReflectionOptimizer,
                          Boolean.TrueString);
                  configuration.SetProperty
                      (NHibernate.Cfg.Environment.QueryStartupChecking,
                          Boolean.FalseString);
                  configuration.SetProperty(NHibernate.Cfg.Environment.ShowSql, 
                      Boolean.FalseString);
                  configuration.SetProperty
                      (NHibernate.Cfg.Environment.UseProxyValidator, 
                          Boolean.FalseString);
                  configuration.SetProperty
                      (NHibernate.Cfg.Environment.UseSecondLevelCache,
                          Boolean.FalseString);

                  configuration.AddAssembly(typeof(Enrollment).Assembly);
                  _sessionFactory = configuration.BuildSessionFactory();
              }
              return _sessionFactory;
          }
      }

      And here is the InitializeSessionFactory method for Fluent NHibernate, with the equivalent changes included:

      private static void InitializeSessionFactory()
      {
          string connectionString = ConfigurationManager.ConnectionStrings[_connectionKeyName]
              .ToString();

          _sessionFactory = Fluently.Configure()
              .Database(MsSqlConfiguration.MsSql2012.ConnectionString(connectionString).ShowSql())
              .Mappings(m => m.FluentMappings.AddFromAssemblyOf<Enrollment>())
              .BuildConfiguration().SetProperty
                  (NHibernate.Cfg.Environment.FormatSql, Boolean.FalseString)
              .SetProperty(NHibernate.Cfg.Environment.GenerateStatistics,
                  Boolean.FalseString)
              .SetProperty(NHibernate.Cfg.Environment.Hbm2ddlKeyWords,
                  NHibernate.Cfg.Hbm2DDLKeyWords.None.ToString())
              .SetProperty(NHibernate.Cfg.Environment.PrepareSql,
                  Boolean.TrueString)
              .SetProperty(NHibernate.Cfg.Environment.PropertyBytecodeProvider,
                  "lcg")
              .SetProperty
                  (NHibernate.Cfg.Environment.PropertyUseReflectionOptimizer,
                      Boolean.TrueString)
              .SetProperty(NHibernate.Cfg.Environment.QueryStartupChecking,
                  Boolean.FalseString)
              .SetProperty(NHibernate.Cfg.Environment.ShowSql, Boolean.FalseString)
              .SetProperty(NHibernate.Cfg.Environment.UseProxyValidator,
                  Boolean.FalseString)
              .SetProperty(NHibernate.Cfg.Environment.UseSecondLevelCache,
                  Boolean.FalseString)
              .BuildSessionFactory();
      }

      The following table gives a brief description of the purpose of these settings:

      Setting                   Purpose
      FormatSql                 Format the SQL before sending it to the database
      GenerateStatistics        Produce statistics on the operations performed
      Hbm2ddlKeyWords           Automatically quote all database object names
      PrepareSql                Compile the SQL before executing it
      PropertyBytecodeProvider  Bytecode provider to use for code generation
      QueryStartupChecking      Check all named queries present in the startup configuration
      ShowSql                   Show the generated SQL
      UseProxyValidator         Validate that mapped entities can be used as proxies
      UseSecondLevelCache       Enable the second-level cache

      Notice that several of these (FormatSql, GenerateStatistics, ShowSql) are most useful for debugging.  It is not clear why they are enabled by default in NHibernate; it seems to me that these should be opt-in settings, rather than opt-out.

      Here are the results of tests of the NHibernate frameworks with these changes in place:

      Framework                        Operation     Elapsed Time (seconds)
      NHibernate (Optimized)           Insert        5.0894047
      NHibernate (Optimized)           Select        5.2877312
      NHibernate (Optimized)           Update        133.9417387
      NHibernate (Optimized)           Delete        5.6669841

      Fluent NHibernate (Optimized)    Insert        5.0175024
      Fluent NHibernate (Optimized)    Select        5.2698945
      Fluent NHibernate (Optimized)    Update        128.3563561
      Fluent NHibernate (Optimized)    Delete        5.5299521

      These results are much improved, with the INSERT, SELECT, and DELETE operations nearly matching the results achieved by the custom framework.   The UPDATE performance, while improved, is still relatively poor.

      What’s Up with Update Performance in NHibernate?

      The poor update performance is a mystery to me.  I have researched NHibernate optimization techniques and configuration settings, and have searched for other people reporting problems with UPDATE operations.  Unfortunately, I have not been able to find a solution.

      This is disappointing, as I personally found NHibernate more comfortable to work with than Entity Framework, and because it beats or matches the performance of Entity Framework for SELECT, INSERT, and DELETE operations.

      If anyone out there knows of a solution, please leave a comment!

      Results Summary

      The following table summarizes the results of the tests using the optimal configuration for each framework.  These are the same results shown earlier in this post, combined here in a single table.

      Framework                        Operation     Elapsed Time (seconds)
      Custom                           Insert        5.9526039
      Custom                           Select        1.9980745
      Custom                           Update        5.0850357
      Custom                           Delete        3.7785886

      Custom (no SPs)                  Insert        5.2251725
      Custom (no SPs)                  Select        2.0028176
      Custom (no SPs)                  Update        4.5381994
      Custom (no SPs)                  Delete        3.7064278

      Entity Framework (Optimized)     Insert        14.7847024
      Entity Framework (Optimized)     Select        5.5516514
      Entity Framework (Optimized)     Update        13.823694
      Entity Framework (Optimized)     Delete        10.0770142

      NHibernate (Optimized)           Insert        5.0894047
      NHibernate (Optimized)           Select        5.2877312
      NHibernate (Optimized)           Update        133.9417387
      NHibernate (Optimized)           Delete        5.6669841

      Fluent NHibernate (Optimized)    Insert        5.0175024
      Fluent NHibernate (Optimized)    Select        5.2698945
      Fluent NHibernate (Optimized)    Update        128.3563561
      Fluent NHibernate (Optimized)    Delete        5.5299521

      And here is a graph showing the same information:

      [Graph of the elapsed times for each framework and operation]

    That Conference 2016 – Session Resources

    Last week I had the pleasure of attending the 2016 edition of That Conference.

    It was an all-around excellent experience.  The venue, topics, speakers, sponsors, food, after-hours events, and swag all left little to complain about.  Like many technical conferences, That Conference set aside areas and times for free-form open discussions led by attendees on topics of their choosing; its version of this was called Open Spaces, and by all accounts it was a success.  While I only took part in a single discussion, I observed that the area designated for those discussions was always busy.

    Conference experiences can be spoiled by inexperienced or ill-prepared session speakers.  At That Conference I was pleased by the quality of the speakers in all twelve sessions and three keynotes that I attended.  However, there were too many interesting sessions (a good thing!) and too little time (can’t be helped).  Therefore, since returning home I have been watching social media and the conference website in order to compile links to as many of the session materials as possible.

    Here are the links to everything that I have been able to find.  (If you know of others, please post a comment with the links!)

    Against Best Practices – Embracing the Avant Garde for a Weirder Web
    Chelsea Maxwell

    As Seen On TV: Developing Apps for Apple TV and TVOS
    Matthew Soucoup

    Back to the Future => C# 7
    Mike Harris

    Battle of the CLI: Gulp vs. Grunt
    Abbey Gwayambadde

    Be An Expert Xamarin Outdoorsman with the Ultimate Xamarin Toolchain
    Vince Bullinger

    Bear Proof Applications: Using Continuous Security to Mitigate Threats
    Wendy Istanick

    Boost Your Immune System with DevOps
    Michelle Munstedt

    Build and Deploy Your ASP.NET Core Applications… Automatically!
    Brandon Martinez

    Build Your Own Smart Home
    Brandon Satrom

    Building Mobile Games That Make Money
    Scott Davis

    C#: You Don’t Know Jack
    George Heeres

    Clean Architecture: Patterns, Practices, and Principles
    Matthew Renze

    Common T-SQL Mistakes
    Kevin Boles

    Computer Science: The Good Parts
    Jeffery Cohen

    Daring to Develop With Docker
    Philip Nelson

    Date and Time: Odds, Ends, and Oddities
    Maggie Pint

    Domain Driven Data
    Bradley Holt

    Enough Cryptography to be Dangerous
    Steve Marx

    The Experimentation Mindset
    Doc Norton

    Finding Your Way to the App Store
    Matthew Ridley

    From Inception to Production: A Continuous Delivery Story
    Ian Randall

    From Mobile First to Offline First
    Bradley Holt

    Full-Stack ASP.NET MVC Performance Tuning
    Dustin Ewers

    Happy Full-Stack Javascript Campers
    Ryan Niemeyer

    How I Learned To Love Dependency Injection
    James Bender

    Identity Management in ASP.NET Core
    Ondrej Balas

    An Internet Of Beers
    Wade Wegner

    Intro to Typescript
    Jody Gustafson

    Introduction to Angular 2.0
    Jeremy Foster

    Javascript Code Quality
    Md Khan

    Keynote: Family Keynote
    Neely Drake and Emily Davis

    Keynote: From 0 to 100,000: How Particle Failed, then Succeeded, then Scaled
    Zach Supalla

    Keynote: Stop Writing Code
    Keith Casey

    Keynote: You Have Too Much Time
    Jeff Blankenburg

    Mastering Voice UX Featuring Amazon’s Echo (AKA Alexa)
    Chris Pauly

    A Microservices Architecture That Emphasizes Rapid Development
    Rob Hruska

    Microsoft Bot Framework: Hiking Up the Trail of Automation
    David Hauck

    The Millennials R Coming
    Heather Shapiro

    Node.JS Crash Course
    David Neal

    Not Just Arts & Crafts: A Developer’s Guide to Incorporating Lean UX Practices into the Development Process
    Rachel Krause

    Out With the Old, In With the New: A Comparison of Angular 1 and 2
    Tony Gemoll

    Pavlov Yourself!
    Alexandra Feldman

    React Native to the Rescue
    Josh Gretz

    React vs. Angular – Dawn of Changes
    John Ptacek

    ReactJS For Beginners
    Arthur Kay

    Ruby on Rails from 0 to Deploy in 60 Minutes
    Chris Johnson

    Ruby Writing Ruby – Campfire Tales of Metaprogramming
    Sara Gibbons

    Service Bus Summer Camp
    David Boike

    So Many Analytics Tools, Which One Is Right For Me?
    Jason Groom

    Start Your Own Business, Dammit!
    Terra Fletcher

    A Tale of Two Redesigns
    Jess Bertling

    Tell SQL Server Profiler To Take A Hike
    Jes Borland

    Understanding Git, Part 2
    Keith Dahlby

    UX Beyond the UI – How the Rest of Software Development Affects User Experience
    Joe Regan

    Why Your Site Is Slow
    Steve Persch

    Working From Whereever
    Aaron Douglas

    Log Parser – Transforming Plain Text Files

    This post describes how to solve a specific problem with Microsoft’s Log Parser tool.  For background on the tool (and lots of examples), start here.

    The Problem

    Given a file named MyLog.log that looks like this…

    ip=0.0.0.0 date=20160620 time=06:00:00 device=A23456789 log=00013
    ip=0.0.0.1 date=20160621 time=06:00:01 device=A13456789 log=00014
    ip=0.0.0.2 date=20160622 time=06:00:02 device=A12456789 log=00015
    ip=0.0.0.3 date=20160623 time=06:00:03 device=A12356789 log=00016
    ip=0.0.0.4 date=20160624 time=06:00:04 device=A12346789 log=00017
    ip=0.0.0.5 date=20160625 time=06:00:05 device=A12345789 log=00018
    ip=0.0.0.6 date=20160626 time=06:00:06 device=A12345689 log=00019
    ip=0.0.0.7 date=20160627 time=06:00:07 device=A12345679 log=00020
    ip=0.0.0.8 date=20160628 time=06:00:08 device=A12345678 log=00021
    ip=0.0.0.9 date=20160629 time=06:00:09 device=A123456789 log=00022

    …transform it into a tab-separated file with a header row.  Each field should include only the field value (and not the field name).

    Notice that the original file has no header, the fields are separated with spaces, and the field name is part of each field (e.g. "ip=").

    The Solution

    Step 1)

    logparser -i:TSV -iSeparator:space -headerRow:OFF
         "select * into 'MyLogTemp.log' from 'MyLog.log'"
         -o:TSV -oSeparator:space -headers:ON

    In this command, -i:TSV -iSeparator:space informs Log Parser that the input file is a space-separated text file, and -headerRow:OFF lets Log Parser know that the file has no headers.  Likewise, -o:TSV -oSeparator:space -headers:ON tells Log Parser to output a space-separated text file with headers.

    This produces a file named MyLogTemp.log with the following content:

    Filename RowNumber Field1 Field2 Field3 Field4 Field5
    MyLog.log 1 ip=0.0.0.0 date=20160620 time=06:00:00 device=A23456789 log=00013
    MyLog.log 2 ip=0.0.0.1 date=20160621 time=06:00:01 device=A13456789 log=00014
    MyLog.log 3 ip=0.0.0.2 date=20160622 time=06:00:02 device=A12456789 log=00015
    MyLog.log 4 ip=0.0.0.3 date=20160623 time=06:00:03 device=A12356789 log=00016
    MyLog.log 5 ip=0.0.0.4 date=20160624 time=06:00:04 device=A12346789 log=00017
    MyLog.log 6 ip=0.0.0.5 date=20160625 time=06:00:05 device=A12345789 log=00018
    MyLog.log 7 ip=0.0.0.6 date=20160626 time=06:00:06 device=A12345689 log=00019
    MyLog.log 8 ip=0.0.0.7 date=20160627 time=06:00:07 device=A12345679 log=00020
    MyLog.log 9 ip=0.0.0.8 date=20160628 time=06:00:08 device=A12345678 log=00021
    MyLog.log 10 ip=0.0.0.9 date=20160629 time=06:00:09 device=A123456789 log=00022

    This hasn’t done much.  In fact it has added some things that are not relevant (the Filename and RowNumber columns), while leaving the field names in each field and keeping the space field separator.  However, it HAS added headers (Field1, Field2, etc.), which are needed for the second step.

    Step 2)

    logparser -i:TSV -iSeparator:space -headerRow:ON
         "select REPLACE_STR(Field1, 'ip=', '') AS ip,
                   REPLACE_STR(Field2, 'date=', '') AS date,
                   REPLACE_STR(Field3, 'time=', '') AS time,
                   REPLACE_STR(Field4, 'device=', '') AS device,
                   REPLACE_STR(Field5, 'log=', '') AS log
         into 'MyLogTransformed.log'
         from 'MyLogTemp.log'"
         -o:TSV -oSeparator:tab -headers:ON

    The input and output specifications in this command are similar to those in Step 1, except here the input file has headers (-headerRow:ON) and the output file is tab-separated (-oSeparator:tab) instead of space-separated.  The main difference is in the SELECT statement itself, where the use of the REPLACE_STR function removes the field names from the field values and the AS statement assigns the desired headers to each column of data.  Notice that the REPLACE_STR function uses the headers that were added in Step 1.

    This produces the final result in a file named MyLogTransformed.log:

    ip     date     time     device     log
    0.0.0.0     20160620     06:00:00     A23456789     00013
    0.0.0.1     20160621     06:00:01     A13456789     00014
    0.0.0.2     20160622     06:00:02     A12456789     00015
    0.0.0.3     20160623     06:00:03     A12356789     00016
    0.0.0.4     20160624     06:00:04     A12346789     00017
    0.0.0.5     20160625     06:00:05     A12345789     00018
    0.0.0.6     20160626     06:00:06     A12345689     00019
    0.0.0.7     20160627     06:00:07     A12345679     00020
    0.0.0.8     20160628     06:00:08     A12345678     00021
    0.0.0.9     20160629     06:00:09     A123456789     00022
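
    For comparison, the same transformation can also be done with a few lines of C#.  Here is a minimal sketch; it assumes every input line contains exactly the space-separated name=value fields shown above.

        using System.IO;
        using System.Linq;

        class TransformLog
        {
            static void Main()
            {
                using (var writer = new StreamWriter("MyLogTransformed.log"))
                {
                    writer.WriteLine("ip\tdate\ttime\tdevice\tlog");

                    foreach (string line in File.ReadLines("MyLog.log"))
                    {
                        // Keep only the text after each "name=" prefix, then join with tabs.
                        var values = line.Split(' ')
                                         .Select(field => field.Substring(field.IndexOf('=') + 1));
                        writer.WriteLine(string.Join("\t", values));
                    }
                }
            }
        }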

    More Information

    See Log Parser’s built-in help for additional explanations of the Log Parser features used in the solution.  In particular, look at the following:

    logparser -h
    logparser -h -i:TSV
    logparser -h -o:TSV
    logparser -h FUNCTIONS REPLACE_STR

    Recommended Tool: Express Profiler for SQL Server Databases

    NOTE:  As I was writing up this post I discovered the news that SQL Profiler is deprecated as of the release of SQL Server 2016.  If this also affects the underlying SQL Server tracing APIs, then this news may affect the long-term future of the Express Profiler.  For now, however, it is a tool that I recommend.

    Express Profiler is a simple Open Source alternative to the SQL Profiler that ships with the full SQL Server Management Studio.  This is particularly useful when working with SQL Server Express databases, as the Express version of the Management Studio does NOT include the SQL Profiler.

    Usage of the Express Profiler should be self-explanatory to anyone familiar with the SQL Profiler.

    Here are some details about Express Profiler from the project page:

    • ExpressProfiler (aka SqlExpress Profiler) is a simple and fast replacement for SQL Server Profiler with basic GUI
    • Can be used with both Express and non-Express editions of SQL Server 2005/2008/2008r2/2012/2014 (including LocalDB)
    • Tracing of a basic set of events (Batch/RPC/SP:Stmt Starting/Completed, Audit login/logout, User error messages, Blocked Process report) and columns (Event Class, Text Data, Login, CPU, Reads, Writes, Duration, SPID, Start/End time, Database/Object/Application name) – both selectable
    • Filters on most data columns
    • Copy all/selected event rows to clipboard in form of XML
    • Find in "Text data" column
    • Export data in Excel’s clipboard format

    While I have found Express Profiler to be a good and useful tool, it is not as fully-featured as the SQL Profiler.  Here are some key "missing" features in Express Profiler:

    • No way to load a saved trace output, although that feature is on the roadmap for the tool.
    • No way to save trace output directly to a database table.
    • Fewer columns can be included in the trace output, and many fewer events can be traced.  In my experience, however, the columns and events that I find myself using in most cases are all available.
    • As there are fewer columns in the output, there are fewer columns on which to filter.  Again, the most common/useful columns and events are covered.
    • No way to create trace templates for use with future traces.

    Despite these limitations, I recommend this tool for situations where the full SQL Profiler is not available.