Alexander Manekovskyi

Writing About Tech

Better Command Line Experience on Windows With ConEmu, Clink and Oh My Posh

Recently, I stumbled upon Brad Wilson’s post - Anatomy of a Prompt (PowerShell) - and decided that I also want a fancy-looking command prompt for cmd.exe. Fanciness includes but is not limited to:

  • A custom prompt that displays the computer name and current user, git status, and features pretty-looking powerlines
  • Persistent command history
  • Command completion, alias/macro support + their expansion on demand

This is what my console looks like after all modifications:

Awesome looking command line prompt

At first, I was planning to give an overview of my current setup, but then the description grew, and now I have a detailed guide about how to improve the look and feel of the CMD.


Set Command Aliases/Macros For CMD.exe In ConEmu

For many years I have been a loyal and happy user of ConEmu. This tool is great - it is reliable, fast, and highly configurable. ConEmu has a portable version, so setup replication is not a problem - I keep it in the ever-growing list of utilities on my cloud storage.

One of the issues with cmd is the absence of persistent user-scoped command aliases or macros. Yes, there is the DOSKEY command, but you are required to integrate its invocation into your cmd startup. To get more control over my macro setup, I wrote a utility (GitHub - manekovskiy/aliaser) that pulls a list of command aliases from a file and sets them up for the current process.

ConEmu provides native support for command aliases; refer to ConEmu | Settings › Environment page for more details.

My setup is as follows:

  • Put the compiled aliaser.exe and a file containing aliases (my list - GitHub - aliases.txt; an illustrative example follows below) into the %ConEmuBaseDir%\Scripts folder.
  • Add a batch file containing the invocation of the aliaser utility to the %ConEmuBaseDir%\Scripts folder.
setup-aliases.cmd
@echo off
call "%~dp0aliaser.exe" -f "%~dp0aliases.txt"
  • Update ConEmu CMD tasks to include setup-aliases.cmd invocation. Example: cmd.exe /k setup-aliases.cmd

There is no need to provide a full path to the batch file because the default ConEmu setup adds the %ConEmuBaseDir%\Scripts folder to the PATH environment variable.
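For illustration, an aliases file is just a list of name=command macro definitions. The format below is an assumption based on DOSKEY macro syntax - check the aliaser README and my aliases.txt for the format the tool actually expects:

aliases.txt (illustrative)
..=cd ..
ls=dir /b $*
gs=git status
gl=git log --oneline -10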

Integrate Clink

Another great addition to cmd is Clink - it augments the command line with many great features like persistent history, environment variable name completion, scriptable keybindings, and command completions.

Again, ConEmu provides integration with Clink (see ConEmu | cmd.exe and clink). It is important to note that ConEmu works well only with the currently active fork of the Clink project - chrisant996/clink.

In short, to install and enable Clink in ConEmu, you should extract the contents of the Clink release archive into %ConEmuBaseDir%\clink and check the “Use Clink in prompt” option under the Features settings section.

Enable "Use Clink in prompt" configuration setting

An indicator of successful integration is the text mentioning the Clink version and its authors, similar to the following:

Clink v1.2.9.329839
Copyright (c) 2012-2018 Martin Ridgers
Portions Copyright (c) 2020-2021 Christopher Antos
https://github.com/chrisant996/clink

Configure Clink Completions

One of the most powerful features of Clink is that it is scriptable through Lua. It is possible to add custom command completion logic, add or change keybindings, or even modify the look of the prompt.
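As a small illustration of what this scripting looks like (a minimal sketch using the clink.argmatcher API from the chrisant996 fork; the “deploy” tool and its arguments are made up), a custom completion can be registered like this:

-- completes subcommands and flags for a hypothetical "deploy" tool
clink.argmatcher("deploy")
    :addarg("staging", "production")
    :addflags("--dry-run", "--verbose")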

On startup, Clink looks for a clink.lua script, which is an entry point for all extension registrations. There are a couple of places where Clink tries to locate the file; one of them is the %CLINK_INPUTRC% folder. There should be an empty clink.lua file in the %ConEmuBaseDir%\clink folder (it comes as a part of the Clink release). To make it visible to ConEmu and Clink, add a CLINK_INPUTRC variable to the ConEmu Environment configuration: set CLINK_INPUTRC=%ConEmuBaseDir%\clink.

Add "CLINK_INPUTRC" variable to the ConEmu Startup settings

Not so long ago, I found that the Cmder (a quite opinionated build of ConEmu) distribution already contains Clink and completion files for all the super popular command-line utilities. A bit of searching showed that the completions in Cmder come from the GitHub - vladimir-kotikov/clink-completions repository.

Download the latest available clink-completions release and unpack it into %ConEmuBaseDir%\clink\profile. I decided to drop the version number from the clink-completions folder name so I would not have to update the registration script every time I update the completions. I also wanted the registration of Clink extensions to be as universal as possible, so I went with a convention-based approach to file and folder organization:

  • Each Clink extension should be put into a separate folder under %ConEmuBaseDir%\clink\profile. This ensures proper grouping and logical separation of scripts.
  • Each Clink extension group should define a registration script under %ConEmuBaseDir%\clink\profile.
  • Code in clink.lua should locate all registration scripts under %ConEmuBaseDir%\clink\profile and unconditionally execute them.

Example folder structure:

📂 %ConEmuBaseDir%\clink
  📂 profile
    📁 clink-completions
    📁 extension-x
    📁 oh-my-posh
    📄 clink-competions.lua
    📄 oh-my-posh.lua
    📄 extension-x.lua
  📄 clink.lua

The registration script is simple:

clink.lua
-- clink.lua

-- Globals
__clink_dir = clink.get_env('ConEmuBaseDir')..'/clink/'
__clink_profile_dir = clink.get_env('ConEmuBaseDir')..'/clink/profile/'

-- Load profile scripts
for _,lua_module in ipairs(clink.find_files(__clink_profile_dir..'*.lua')) do
    local filename = __clink_profile_dir..lua_module
    -- use dofile instead of require because require caches loaded modules
    -- so config reloading using Alt-Q won't reload updated modules.
    dofile(filename)
end

The clink-completions registration script is also a bare minimum. I extracted it from Cmder’s clink.lua:

clink-completions.lua
-- clink-completions.lua
-- Completion scripts taken from https://github.com/vladimir-kotikov/clink-completions
-- Last updated on 6/5/2021. Version 0.3.7.

local completions_dir = __clink_profile_dir..'clink-completions/'
-- Execute '.init.lua' first to ensure package.path is set properly
dofile(completions_dir..'.init.lua')
for _,lua_module in ipairs(clink.find_files(completions_dir..'*.lua')) do
    -- Skip files that start with _. This could be useful if some files should be ignored
    if not string.match(lua_module, '^_.*') then
        local filename = completions_dir..lua_module
        -- use dofile instead of require because require caches loaded modules
        -- so config reloading using Alt-Q won't reload updated modules.
        dofile(filename)
    end
end

Change The Prompt With Oh My Posh

If you have never heard of it before, Oh My Posh is a command prompt theme engine. It was first created for PowerShell, but now, in V3, Oh My Posh has become cross-platform with a universal configuration format, which means that you can use it in any shell or OS.

The ConEmu distribution contains an initialization script file - CmdInit.cmd - which can display the current git branch and/or current user name in the command prompt (see ConEmu | Configuring Cmd Prompt for more details).

When it comes to CMD, Oh My Posh can be integrated through Clink. Clink has a concept of prompt filters - code that executes when the prompt is being rendered.

The installation and Clink integration steps are very straightforward:

  1. Download the latest release
  2. Move the executable to the %ConEmuBaseDir%\clink\profile\oh-my-posh\bin folder
  3. Add oh-my-posh.lua script to the %ConEmuBaseDir%\clink\profile folder
  4. Create a theme file (I named mine amanek.omp.json)

Following is the expected folder structure:

📂 %ConEmuBaseDir%\clink
  📂 profile
    📂 oh-my-posh
      📂 bin
        📦 oh-my-posh.exe <-- note that I renamed the executable file to oh-my-posh.exe.
      🎨 amanek.omp.json
    📄 oh-my-posh.lua
  📄 clink.lua

The registration script was inspired by the Clink project readme file:

oh-my-posh.lua
-- oh-my-posh.lua
-- Taken from https://github.com/chrisant996/clink/blob/master/docs/clink.md#oh-my-posh

local ohmyposh_dir = __clink_profile_dir.."oh-my-posh/"
local ohmyposh_exe = __clink_profile_dir.."oh-my-posh/bin/oh-my-posh.exe"

local ohmyposh_prompt = clink.promptfilter(1)
function ohmyposh_prompt:filter(prompt)
    prompt = io.popen(ohmyposh_exe.." --config "..ohmyposh_dir.."amanek.omp.json --shell universal"):read("*a")
    return prompt, false
end

Oh My Posh comes with a wide variety of prebuilt themes. The customization process is well described in the Override the theme settings documentation section.

Here is the link to my theme file - GitHub - amanek.omp.json. It includes the following sections:

  • Indicator of elevated prompt. Displays a lightning symbol if my console instance is running as Administrator.
  • Logged-in user name and computer name. I frequently connect to different machines over RDP, so it is good to know where I am right now 😊
  • Location path
  • Git status

One of the issues I encountered during the command prompt customization process is that ConEmu remaps console colors and replaces them with its own color scheme:

ConEmu Colors configuration section

As you can see, ConEmu uses a color scheme based on the 16 ANSI colors. Fortunately, in Oh My Posh it is also possible to specify a color using one of the well-known 16 color names (see the standard colors documentation section). Here is the “ConEmu color number to Oh My Posh color name” conversion table:

Number  Color
0       black
1/4     blue
2       green
3/6     cyan
4/1     red
5       magenta
6/3     yellow
7       white
8       darkGray
9       lightBlue
10      lightGreen
11      lightCyan
12      lightRed
13      lightMagenta
14      lightYellow
15      lightWhite
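For example, a segment in the theme file can reference these names directly (a minimal fragment of a segment definition under the V3 schema, not a complete theme; the segment type and style are only illustrative):

{
  "type": "session",
  "style": "plain",
  "foreground": "black",
  "background": "lightYellow"
}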

Another thing that did not work for me right away was fonts. To render powerlines and icons, Oh My Posh requires the terminal to use a font that contains glyphs from the Nerd Fonts. The Nerd Fonts readme contains links to patched and supported fonts with permissive licensing terms.

The font that I prefer to use was not on the list, so I had to patch it manually. The process is as follows:

  1. Clone Nerd Fonts repository
  2. Install Font Forge and Python
  3. Copy font file to a separate folder
  4. In the terminal, navigate to the Nerd Fonts repo root and run the following command
<PATH_TO_FONT_FORGE>\fontforge.exe -script font-patcher --windows --complete <PATH_TO_FONT_FILE>

More detailed instructions are available here - Nerd Fonts | Patch Your Own Font.

The Nerd Fonts repository is heavy, and I recommend doing a shallow clone with the --depth 1 option.

Conclusions

The amount of work people have put into the open-source projects I mentioned is astounding. I was also pleasantly surprised by the quality of the tools and customization options available for CMD. Never before has my terminal window been so aesthetically pleasing and functionally rich. Now I feel more inspired to continue experimenting with my setup, and I hope this guide helped to improve your console experience.

Good luck and happy hacking!

Why Your Team Should Do a Code Review on a Regular Basis

While working with any system we have to take into account so many aspects that, even armed with the best tooling and extensive test suites, we cannot guarantee 100% (but surely we can plan and minimize the risks) that the development/maintenance cost of our software will not exceed the profit it generates.

And to minimize the risks and costs we have to follow good practices, methodologies and techniques. My personal approach to the aforementioned good practices is very pragmatic - if something has proved that it can simplify the life of the team, we’ll use it. And one of the things that are often underestimated or neglected is code (or peer) review.

In software development, peer review is a type of software review in which a work product (document, code, or other) is examined by its author and one or more colleagues, in order to evaluate its technical content and quality.

From Wikipedia, the free encyclopedia.

What Are The Benefits Of Doing A Code Review?

The main intent of code review is to identify source code defects and quality issues. Another big advantage is knowledge transfer. This is maybe the least expected outcome of the code review process. I have personally observed lots of cases where reviewers were giving links and citing external resources in their comments. Those additional resources helped the author get deeper into the details, see the issue from different angles and, as a result, produce better code.

For those who like numbers I recommend reading the study by Bacchelli A. and Bird C., Expectations, outcomes, and challenges of modern code review, which characterizes the motivations of developers and managers for code review and compares them with actual results.

Steve McConnell also gives plenty of evidence of code review effectiveness in Code Complete:

Technical reviews have been studied much longer than pair programming, and their results, as described in case studies and elsewhere, have been impressive:

  • IBM found that each hour of inspection prevented about 100 hours of related work (testing and defect correction) (Holland 1999).
  • Raytheon reduced its cost of defect correction (rework) from about 40 percent of total project cost to about 20 percent through an initiative that focused on inspections (Haley 1996).
  • Hewlett-Packard reported that its inspection program saved an estimated $21.5 million per year (Grady and Van Slack 1994).
  • Imperial Chemical Industries found that the cost of maintaining a portfolio of about 400 programs was only about 10 percent as high as the cost of maintaining a similar set of programs that had not been inspected (Gilb and Graham 1993).
  • A study of large programs found that each hour spent on inspections avoided an average of 33 hours of maintenance work and that inspections were up to 20 times more efficient than testing (Russell 1991).
  • In a software-maintenance organization, 55 percent of one-line maintenance changes were in error before code reviews were introduced. After reviews were introduced, only 2 percent of the changes were in error (Freedman and Weinberg 1990). When all changes were considered, 95 percent were correct the first time after reviews were introduced. Before reviews were introduced, under 20 percent were correct the first time.
  • A group of 11 programs were developed by the same group of people, and all were released to production. The first five were developed without reviews and averaged 4.5 errors per 100 lines of code. The other six were inspected and averaged only 0.82 errors per 100 lines of code. Reviews cut the errors by over 80 percent (Freedman and Weinberg 1990).
  • Capers Jones reports that of all the software projects he has studied that have achieved 99 percent defect-removal rates or better, all have used formal inspections. Also, none of the projects that achieved less than 75 percent defect-removal efficiency used formal inspections (Jones 2000).

How Does It Work?

The typical code review process is as follows:

  1. The author of a change generates a patch and sends it to the code review system
  2. The author invites teammates to review the code
  3. Code review participants add comments and suggestions on code improvement
  4. The author either follows the suggestions and updates the code or rejects them
  5. The code review is updated by the author and a new review iteration is started
  6. When all debates around the change are finished, the code review is approved and the change is merged into the repository.

Where To Start?

Code review is often supported by tools, preferably integrated into the development environment. If you are working alone, there is a site where you can ask for a peer programmer code review - Code Review. Just like Stack Overflow, this site has an army of active members who will happily help you no matter what language or technology you are using.

There are also plenty of tools available on the market, from the aforementioned Stack Exchange site to TFS support and the integrated code review tooling inside GitHub, GitLab, Bitbucket and other OSS collaboration platforms.

Conclusions

If you are looking for a way to improve the state of the codebase and/or the development process in general, start practicing code review on a daily basis.

Surely, if the team has never practiced code review before it will be harder to start, but as Laozi said, “The journey of a thousand miles begins with one step”. And I wish you success!

Generate TypeScript Interfaces From .NET Assemblies Using T4 Templates

Introduction

When it comes to writing an HTML/JavaScript client for your (“your” here means you own the code or have direct access to the assemblies) web service, there is one thing that bothers everyone - translating classes from .NET to JavaScript. The problem is that whenever your service contract changes you need to reflect this change in your client application. Yes, most of the time this is not the case when the service is already in production, but when the client and the service are both being written at the same time, I think you would agree that continuous changes in the service contract are a common thing.

Another big issue - even if the current service contract (read: API version) is “frozen” and is not going to change in the future, you still have to manually translate all your .NET classes to JavaScript. It is OK if you have a handful of classes, but can you imagine (or even recall) the pain of translating a couple of dozen C# classes to JavaScript?

That is why I’ve decided to share my approach to this issue of translating .NET classes to JavaScript.

The Problem

Let’s imagine a situation where we have two teams working on two projects - the server side and the client side. The server side is an ASP.NET WebAPI service and the client side is an HTML/JavaScript application. As the server project progresses, the client team notices that it continuously has to make little adjustments “here and there” to keep up to date with the WebAPI changes in its DTO classes. So the problem is to automate this process, which is tedious for both teams.

As of writing this post I found that there is a question on StackOverflow showing interest in this topic - How to reuse existing C# class definitions in TypeScript projects.

The Solution

As always, there are two ways of solving the problem - use an existing solution or write a new one.

There are at least two tools available - TypeLite and T4TS. Everything is good with these tools, but when it came to customization it turned out that you need to decorate the classes with some fancy attributes or code transformation functions. This means that you have to mix requirements like module/property naming conventions into classes that are not even aware of the existence of some client project that indirectly depends on them.

You can call me a purist, but hey, why would I need to keep the metadata required for one project in another? And why should I complicate things and instruct the team working on the server side on how to decorate the classes with attributes that are needed by the other team? Simple things should be simple. I just want my C# classes/structs/enums to be transformed into TypeScript interfaces/classes/enums.

From my experience, when it comes to code generation most of the time you will not find a “ready for use” solution that will 100% satisfy you. The best case is that you’ll find something that is simple and easy to change.

So I’ve chosen the second path - hack my own solution. For DTOs I’ve decided to write a code generator based on T4 Text Templates and reflection. And since I have a TypeScript based project, my code generation templates produce TypeScript code. Why TypeScript? For me, the main reason is compile time errors. I like that I can have classes and interfaces whose usage will be checked by the compiler at development time, so I will see the mistakes before I run the app. Also, it is worth mentioning that TypeScript supports almost all features of ECMAScript 6, which is also good because by investing time in TypeScript now I will be up to date with the latest standard available.

I also strongly believe that it is critically important to run code generation on every build and have no auto-generated things committed to source control. This approach minimizes the probability of mistakes made by engineers (yes, I’ve had experience where warnings like // This code was auto-generated were ignored).

Code

Since I’m going to generate classes I have to describe the metadata I need. This would be the name of the interface/enum and a list of its members:

Metadata Classes - MetadataModels.cs
internal enum DtoTypeKind
{
  Interface,
  Enum,
  Class
}

internal class DtoType
{
  public string Name { get; set; }
  public DtoTypeKind Kind { get; set; }
  public IEnumerable<DtoMember> Members { get; set; }
}

internal class DtoMember
{
  public string Name { get; set; }
  public Type Type { get; set; }
}

The MetadataHelper class is the heart of the solution - it extracts the data needed for code generation using reflection:

MetadataHelper.cs
internal static class MetadataHelper
{
  public static DtoType[] GetDtoTypesMetadata(IEnumerable<Type> types)
  {
      return types
          .Where(t => !t.IsAbstract) // We are not interested in abstract classes
          .Where(t => t.GetCustomAttribute<DataContractAttribute>() != null)
          .Select(t => new DtoType
          {
              Name = t.Name,
              // struct => interface
              // class => class
              // enum => enum. Must check for enum first because an enum is a ValueType and we want to avoid enums being generated as interfaces
              Kind = t.IsEnum
                      ? DtoTypeKind.Enum
                      : t.IsValueType
                          ? DtoTypeKind.Interface
                          : DtoTypeKind.Class,
              Members = t.IsEnum // For enum types we should get its values except the "value__" field
                  ? t.GetFields()
                      .Where(f => f.GetCustomAttribute<DataMemberAttribute>() != null && f.Name != "value__")
                      .Select(f => new DtoMember
                      {
                          Name = f.Name,
                          Type = f.FieldType
                      })
                  : t.GetProperties(BindingFlags.Public | BindingFlags.Instance)
                      .Where(p => p.GetCustomAttribute<DataMemberAttribute>() != null)
                      .Select(p => new DtoMember
                      {
                          Name = p.Name,
                          Type = p.PropertyType
                      })
          })
          .ToArray();
  }
}

To be able to parameterize and run code generation on every build I’m using preprocessed T4 templates (for more information on the topic please refer to Oleg Sych’s Understanding T4: Preprocessed Text Templates blog post). A preprocessed template generates a partial class that I can extend with the metadata I need:

internal partial class TypesGenerator
{
  public DtoType[] DtoTypes { get; set; }

  private IEnumerable<DtoType> Interfaces { get { return DtoTypes.Where(t => t.Kind == DtoTypeKind.Class || t.Kind == DtoTypeKind.Interface); } }
  private IEnumerable<DtoType> Enums { get { return DtoTypes.Where(t => t.Kind == DtoTypeKind.Enum); } }
}

And here is the actual template. It also contains a helper method that translates .NET types to the corresponding TypeScript type names.

T4 Template - TypesGenerator.tt
<#@ template language="C#" visibility="internal" #>
<#@ assembly name="System.Core" #>
<#@ import namespace="System.Collections.Generic" #>
//------------------------------------------------------------------------------
// <auto-generated>
//     This code was generated by a tool.
//     Runtime Version: <#= Environment.Version #>
//
//     Changes to this file may cause incorrect behavior and will be lost if
//     the code is regenerated.
// </auto-generated>
//------------------------------------------------------------------------------

"use strict";

// Interfaces
<# foreach(var @interface in this.Interfaces) { #>

export interface <#= @interface.Name #> {

<#    foreach(var member in @interface.Members) { #>
  <#= member.Name#>?: <#= GetTypeScriptFieldTypeName(member.Type) #>;
<#    } #>
}
<# } #>


// Enums
<# foreach(var @enum in this.Enums) { #>

export enum <#= @enum.Name #> {
<#    foreach(var member in @enum.Members) { #>
  <#= member.Name #> = <#= (int)Enum.Parse(member.Type, member.Name) #>,
<#    } #>
}
<# } #>
<#+
  /// <summary>
  /// Returns a corresponding TypeScript type for a given .NET type
  /// </summary>
  public static string GetTypeScriptFieldTypeName(Type type)
  {
      var numberTypes = new HashSet<Type>
      {
          typeof(sbyte), typeof(byte), typeof(short),
          typeof(ushort), typeof(int), typeof(uint),
          typeof(long), typeof(ulong), typeof(float),
          typeof(double), typeof(decimal)
      };
      var stringTypes = new HashSet<Type>
      {
          typeof(char), typeof(string), typeof(Guid)
      };

      var result = "";
      var isCollectionType = false;
      // Check if it is a generic. We support only generics which are compatible with IEnumerable<T> and have only one generic argument
      if (type.IsGenericType) {
          if (!typeof(IEnumerable<object>).IsAssignableFrom(type) && type.GetGenericArguments().Length > 1) {
              throw new Exception(string.Format("The generic type {0} must implement IEnumerable<T> and must have no more than 1 generic argument.", type.FullName));
          }
          // strip the generic type leaving the first generic argument
          type = type.GetGenericArguments()[0];
          isCollectionType = true;
      }

      // Check if it is a primitive type
      if (numberTypes.Contains(type)) result = "number";
      else if (stringTypes.Contains(type)) result = "string";
      else if (type == typeof(bool)) result = "boolean";
      // It is enum/class/struct -> return its name as-is
      else result = type.Name;

      if(isCollectionType) result += "[]";

      return result;
  }
#>

And the usage is very simple. I’ve created a console application which could be launched, for example, on a CI server during the build.

static void Main(string[] args)
{
  if (args.Length == 0 || args[0].IndexOfAny(Path.GetInvalidPathChars()) >= 0)
  {
      throw new ArgumentException("Invalid argument. First argument should be a valid file path.");
  }

  var fileName = args[0];
  var typesMetadata = MetadataHelper.GetDtoTypesMetadata(typeof(Todo).Assembly.ExportedTypes);
  var typesGenerator = new TypesGenerator { DtoTypes = typesMetadata };
  File.WriteAllText(fileName, typesGenerator.TransformText().Trim());
}
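To give an idea of the result (an approximate sketch rather than actual output, with the auto-generated header omitted): for a hypothetical Todo class marked with [DataContract] that exposes [DataMember] properties Id (int), Title (string) and Tags (List<string>), plus a TodoState enum whose fields carry the attribute the helper looks for, the template produces roughly the following TypeScript:

"use strict";

// Interfaces

export interface Todo {

  Id?: number;
  Title?: string;
  Tags?: string[];
}

// Enums

export enum TodoState {
  Active = 0,
  Completed = 1,
}

Hooking the console application into a pre-build event or a CI build task keeps this file regenerated on every build, in line with the “nothing auto-generated in source control” rule above.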

Conclusion

As you can see, with very little effort I’ve got a working code generator that is open to any customization. As always, the code from this post is available on GitHub. Feel free to clone it and adjust it to your needs.

Keep the code simple!

How I’ve Fixed My Dell Inspiron Overheating Issues

Last summer I started experiencing issues when working on CPU bound tasks on my laptop. At first I thought that the main cause was the summer heat - it was 30°C (86°F) at the time when I first noticed my laptop automatically shut down because of overheating. But when the temperature went down and the occasional shutdowns didn’t stop, I understood that I had a real problem.

Dell Inspiron N5110

I own a Dell Inspiron N5110, which has an Intel Core i7-2670QM CPU and an NVidia GeForce GT 525M dedicated GPU. Browsing the Internet showed that I’m not the only one with this issue. But there was no consistent or believable explanation of why the laptop started overheating, and no guide on how to fix it. One part of the community was just blaming Dell’s greediness and/or a cooling system that was not designed for a CPU as powerful as an i7, and another part was suggesting replacing the thermal grease and decreasing the max speed of the CPU through the power management controls. I already knew how to disassemble my laptop (previously I had to replace my stock HDD, which is not a fast or easy operation when you own a Dell laptop), so I decided to replace the thermal grease first and then try to understand and maybe even fix engineering blunders in the cooling system.

Running ahead of the story, I want to say that I successfully accomplished both tasks and reduced the overall temperature of my CPU by 20°C (36°F), resulting in a stable 50°C (122°F) when idle and 85-90°C (185-194°F) under continuous 100% load.

Step 1: Clean the dust and replace the thermal grease

The things you’ll need:

  • The thermal grease. For those who are interested, I was using Zalman ZM-STG2.
  • The Dell Inspiron N5110 Service Manual. This is a “must” if you have never seen the “innards” of your laptop. You’ll have to follow the steps from the “Removing the Thermal-Cooling Assembly” section (see page 75). Friendly tip: print the pages with the necessary steps, as it is hard to remember everything when disassembling a laptop for the first time.
    I’m sure that there are also plenty of video guides showing how to do this, but being a bit old school I prefer reading over watching, so I cannot recommend any video guide.

It turned out that the stock thermal grease had become rock solid and was no longer able to do its work. I used 70% isopropyl alcohol to remove it.

Rock solid thermal grease on CPU and GPU

The fan was also full of dirt. The sad fact is that you cannot open the fan case without removing the whole cooling system. This means that every time you want to clean it of dirt and dust you’ll have to replace the thermal grease.

Dirt inside cooling fan

So after I replaced the thermal grease and cleaned the fan, the CPU temperature decreased by around 15°C (27°F). That was a big win.

Step 2: Fix the airflow inside the cooling system

After two weeks I decided to try to make the air flow inside the laptop more streamlined. The first thing I did was close the hole in the motherboard with a piece of thick paper. The idea was to minimize the amount of hot air going under my keyboard, which was sometimes making it too hot to work with normally.

Dell Inspiron N5110

Secondly, I decided to fix the air intake. From my point of view it had two issues:

  1. For some reason a piece of plastic was covering 25% of the air intake grid, so I just cut it away with a paper knife.

Plastic cover over air intake

Plastic cover over air intake removed

  2. There was a gap of 7mm (~0.25") between the motherboard and the grid, so I made a seal from a little piece of linoleum. I’m sure something thick enough, like a piece of foam rubber, would also work, as the idea is to streamline the air intake and not allow the hot air from the laptop to be drawn in again.

A piece of linoleum that

I just glued the pieces of linoleum to the laptop case and made something that looks like a well.

Linoleum compactor applied

This gave me a little improvement of around 4-5°C (7-9°F). Not much, but still better than nothing.

Conclusions

Replacing the thermal grease and cleaning the dust out of the fan is a must if you want to fix overheating issues. Attempts to improve the air intake could also help to lower the temperature, but not by much.

Anyway, you will not lose anything by trying to make things better.

Good luck!

How to Configure a ConEmu Task for GitHub for Windows Portable Git

2/19/2015 Update: I decided that it would be good to propose the change described in this post to the msysgit project. And today it was accepted and merged. It took me only 7 months to come up with the idea that the change described below could be included in the official release of software that I’m using on a daily basis :)

Maybe a year or so ago I switched from Console2 to ConEmu. One of the reasons behind this switch was the Task concept that ConEmu offered.

There was only one problem with my task setup - I wanted to launch Portable Git, which is a part of the GitHub for Windows installation, inside ConEmu. But launching git-cmd.bat from ConEmu creates a new window.

As you may know, the Portable Git binaries are located in %LOCALAPPDATA%\GitHub\PortableGit_054f2e797ebafd44a30203088cd3d58663c627ef\. Note that the last part of the directory name is a version string, so it could change in the future.

The problem lies in the last line of the git-cmd.bat file:

git-cmd.bat
@rem Do not use "echo off" to not affect any child calls.
@setlocal

@rem Get the absolute path to the current directory, which is assumed to be the
@rem Git installation root.
@for /F "delims=" %%I in ("%~dp0") do @set git_install_root=%%~fI
@set PATH=%git_install_root%\bin;%git_install_root%\mingw\bin;%git_install_root%\cmd;%PATH%

@if not exist "%HOME%" @set HOME=%HOMEDRIVE%%HOMEPATH%
@if not exist "%HOME%" @set HOME=%USERPROFILE%

@set PLINK_PROTOCOL=ssh
@if not defined TERM set TERM=msys

@cd %HOME%
@start %COMSPEC%

To fix the issue, replace the last line @start %COMSPEC% with @call %COMSPEC%.

This change will not break the existing “Open in Git Shell” context action in the GitHub application GUI.

The difference between the start and call commands is that call runs the batch script inside the same shell instance while start creates a new instance. Here is a little fragment from the start and call help:

C:\>call /?
Calls one batch program from another.

C:\>start /?
Starts a separate window to run a specified program or command.

That’s it! Now the following task for ConEmu will work as expected:

*cmd /k Title Git & "%LOCALAPPDATA%\GitHub\PortableGit_054f2e797ebafd44a30203088cd3d58663c627ef\git-cmd.bat"

Automate Your Dev Environment Setup

Every time I need to install and configure a developer environment on a fresh OS (either on a real or a virtual machine) I feel irritated by the fact that I need to spend almost the whole day just clicking around various installation dialogs, confirming destination folders, accepting user agreements (that, I bet, no one has ever tried to read fully) and performing other repetitive and almost pointless tasks.

I’m a developer, I’m creating things (or at least trying to), so why would I waste my time doing dull and pointless work?! Ah, and why should I keep in mind (or in a notepad, an “installs” folder, etc.) a list of my tools and installation packages?

But honestly, I just cannot come up with a single reason why this shouldn’t be automated. Said it - did it. And here are my adventures.

Let’s get Chocolatey?

Maybe you’ve heard about Chocolatey. In short, this tool is like apt-get but for Windows, and it is built on top of NuGet.

For those who are not familiar with NuGet and the whole variety of tools around it, take a look at the An Overview of the NuGet Ecosystem article by Xavier Decoster.

For a quick Chocolatey overview I can recommend Scott Hanselman’s post Is the Windows user ready for apt-get?

At the time of writing, Chocolatey had 1,244 unique packages, which is pretty cool - it is really hard to find a tool that is not packaged there.

After a little search it appeared that I could even install Visual Studio with Chocolatey. Okay, cool, let’s do this.

No Battle Plan Survives Contact With the Enemy

I tried to install my first package on a fresh Windows 8 virtual machine and failed at the very first step. Jumping ahead of the story, that was partially my fault, but let’s roll on.

I wanted no more, no less than to install Visual Studio 2013 Ultimate Preview and see its new shining features for web devs. As described on the site, I installed Chocolatey and ran the cinst VisualStudio2013Ultimate command. The package downloaded, and the .NET 4.5.1 installation started. Boom! I got my first error:

[ERROR] Exception calling "Start" with "1" argument(s): "The operation was canceled by the user"

Chocolatey .NET 4.5.1 installation error

After some research it turned out that by default Windows 8 processes are not launched with administrator privileges (even if the current user is a member of the Administrators group), and because of the silent installation mode (read “non-UI mode”) the UAC prompt was not shown and the attempt to elevate rights was cancelled by default. To fix this issue I had to disable UAC notifications. I had spent quite some time searching for the cause of my issue, so I decided to table VS 2013 for now and proceed with the installation of Visual Studio 2012 instead.

To install the 90-day trial of Visual Studio 2012 Ultimate I ran the cinst VisualStudio2012Ultimate command, and after a little pause and some blinking of the standard installation dialog another crazy error appeared:

blah-blah-blah. Exit code was '-2147185721'

Chocolatey VS 2012 installation error

Thankfully, I have experience with silent installations of Visual Studio and I have a link to the Visual Studio Administrator Guide in my bookmarks, which contains a list of exit codes for the installation package. The -2147185721 code is “Incomplete - Reboot Required”. That sounded logical. Because of the /NoRestart switch in the VS Chocolatey install script, the setup did not reboot and returned a non-zero value, which was treated as an error. Okay, rebooted the machine.

But this was not my last error :). After the reboot, using the -force parameter, I resumed the Visual Studio installation process and got my next error (extracted from the vs.log installation log file):

[0824:0820][2013-09-14T12:56:04]: Applied execute package: vcRuntimeDebug_x86, result: 0x0, restart: None
[082C:09C4][2013-09-14T12:56:04]: Registering dependency: {ae17ae9b-af38-40d2-a194-6102c56ed502} on package provider: Microsoft.VS.VC_RuntimeDebug_x86,v11, package: vcRuntimeDebug_x86
[082C:0850][2013-09-14T12:56:12]: Error 0x80070490: Failed to find expected public key in certificate chain.

The last words from the “chocolatey gods” were Exit code was '1603'.

This time nothing came to my mind except trying to install Windows updates first (the words “certificate chain” led me to this idea). As it turned out, that was the case, and it was my great mistake not to install the updates first.

Moral: never try to install something serious unless you have all updates for your OS installed.

After all these errors I decided to roll my virtual machine back to its initial state and start from scratch. This time I installed all Windows updates first, and after that all Chocolatey packages were installed without any errors.
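For reference, the scripts I keep are nothing more than plain batch files with one cinst call per tool (a simplified sketch; the package ids are illustrative - check the Chocolatey gallery for the exact names):

setup-dev-machine.cmd (illustrative)
@echo off
:: assumes Chocolatey itself is already installed
cinst 7zip
cinst git
cinst notepadplusplus
cinst GoogleChrome
cinst VisualStudio2012Ultimate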

Share all the scripts!


After I finished my journey I decided that it would be great to keep my scripts in one place and have the possibility to share them. I cannot find any better service for this than GitHub. Now I can share my scripts, update them, have a history of changes, and make tags and special branches for specific setups. Isn’t this great and how it should be?

Go fork my repository and start making your life easier!

Conclusions

Here I have taken only the first steps on the road to the bright future of automated environment setup. And while we can use Chocolatey to save time on installations, we still need to configure the stuff. Of course, if you are using default settings this is not a problem, but unfortunately this is not my case ;)

I think in my next post I will share my experience with automated configuration transfer.

Configuring Web Forms Routing With Custom Attributes

1/13/2013 Update: Now the PhysicalFile property is filled and updated automatically using a T4 template. Say good-bye to issues caused by typos and copy-pasting.

Recently I had to add routing to an existing ASP.NET Web Forms application. I was (and I suppose I still am) new to this, so I started with Walkthrough: Using ASP.NET Routing in a Web Forms Application and it seemed fine until I started coding.

The site was nothing special, just approximately 50 pages. But when I started configuring all these pages it felt wrong - I was lost in all these route names, defaults and constraints. If it felt wrong, I thought, why not try something else. I googled around and found a pretty good thing - ASP.NET FriendlyUrls. Scott Hanselman wrote about this in his Introducing ASP.NET FriendlyUrls - cleaner URLs, easier Routing, and Mobile Views for ASP.NET Web Forms post. At first glance it looked far easier and better, but I wanted to use RouteParameters for the datasource controls on my pages. ASP.NET FriendlyUrls provides only the “URL segment” concept - a string that can be extracted from the URL (the string between ‘/’ characters in the URL). URL segments cannot be constrained and thus automatically validated. Also, segments cannot have names, so my idea to use RouteParameter would be killed if I went with ASP.NET FriendlyUrls.

At the end of this little investigation I thought that it would be easier to tie the route configuration to the page class via a custom attribute and conventionally named properties for defaults and constraints. So every page class gets its routing configuration as follows:

namespace RoutingWithAttributes.Foo
{
 [MapToRoute(RouteUrl = "Foo/Edit/{id}")]
  public partial class Edit : Page
  {
      public static RouteValueDictionary Defaults
      {
          get
          {
              return new RouteValueDictionary { { "id", "" } };
          }
      }

      public static RouteValueDictionary Constraints
      {
          get
          {
              return new RouteValueDictionary { { "id", "^[0-9]*$" } };
          }
      }
  }
}

The code above states that the Edit page in the Foo folder of my RoutingWithAttributes web application will be accessible through the http://<application-url>/Foo/Edit URL with an optional id parameter. The default value for the id parameter is an empty string, but it must be an integer number if provided.

For me this works better: it is self-describing and I’m not forced to go to some App_Start\RoutingConfig.cs file and search for it. Now, how does it work under the hood? Nothing new or special - just a bit of reflection on the Application_Start event. Routes are still registered with the RouteCollection.MapPageRoute method.

protected void Application_Start(object sender, EventArgs e)
{
  RouteConfig.RegisterRoutes(RouteTable.Routes);
}

public class RouteConfig
{
  public static void RegisterRoutes(RouteCollection routes)
  {
      var mappedPages = Assembly.GetAssembly(typeof (RouteConfig))
              .GetTypes()
              .AsEnumerable()
              .Where(type => type.GetCustomAttributes(typeof (MapToRouteAttribute), false).Length == 1);

      foreach (var pageType in mappedPages)
      {
          var defaultsProperty = pageType.GetProperty("Defaults");
          var defaults = defaultsProperty != null ? (RouteValueDictionary)defaultsProperty.GetValue(null, null) : null;

          var constraintsProperty = pageType.GetProperty("Constraints");
          var constraints = constraintsProperty != null ? (RouteValueDictionary)constraintsProperty.GetValue(null, null) : null;

          var dataTokensProperty = pageType.GetProperty("DataTokens");
          var dataTokens = dataTokensProperty != null ? (RouteValueDictionary)dataTokensProperty.GetValue(null, null) : null;

          var routeAttribute = (MapToRouteAttribute)pageType.GetCustomAttributes(typeof(MapToRouteAttribute), false)[0];

          if(string.IsNullOrEmpty(routeAttribute.RouteUrl))
              throw new NullReferenceException("RouteUrl property cannot be null");

          if (string.IsNullOrEmpty(routeAttribute.PhysicalFile))
              throw new NullReferenceException("PhysicalFile property cannot be null");

          if(!VirtualPathUtility.IsAppRelative(routeAttribute.PhysicalFile))
              throw new ArgumentException("Property should be application relative URL", "PhysicalFile");

          routes.MapPageRoute(pageType.FullName, routeAttribute.RouteUrl, routeAttribute.PhysicalFile, true, defaults, constraints, dataTokens);
      }
  }
}
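The MapToRouteAttribute itself is not shown above; a minimal sketch consistent with its usage (a class-level attribute exposing the RouteUrl and PhysicalFile properties that RegisterRoutes reads) could look like this - the real implementation lives in the linked repository:

[AttributeUsage(AttributeTargets.Class, AllowMultiple = false, Inherited = false)]
public sealed class MapToRouteAttribute : Attribute
{
  // Route URL pattern, e.g. "Foo/Edit/{id}"
  public string RouteUrl { get; set; }

  // Application-relative path to the page, e.g. "~/Foo/Edit.aspx"
  // (filled automatically by the T4 template mentioned in the update above)
  public string PhysicalFile { get; set; }
}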

The route name is equal to the FullName property of the page type. Since Type.FullName includes both the namespace and the class name, it guarantees route name uniqueness across the application.

To utilize route link generation I created two extension methods for the Page class. These methods are just wrappers over the Page.GetRouteUrl method.

public static class PageExtensions
{
  public static string GetMappedRouteUrl(this Page thisPage, Type targetPageType, object routeParameters)
  {
      return thisPage.GetRouteUrl(targetPageType.FullName, routeParameters);
  }

  public static string GetMappedRouteUrl(this Page thisPage, Type targetPageType, RouteValueDictionary routeParameters)
  {
      return thisPage.GetRouteUrl(targetPageType.FullName, routeParameters);
  }
}

So now I can generate a link to the Foo.Edit page as follows:

    <a href='<%= Page.GetMappedRouteUrl(typeof(RoutingWithAttributes.Foo.Edit), new { id = 1 }) %>'>Foo.Edit</a>

And it will produce the http://<application-url>/Foo/Edit/1 link.

The described approach helped me accomplish the task without frustration, and I’m satisfied with the results.

The code for this article is hosted on GitHub; feel free to use it if you like the idea.

Improve Your Reading Experience With Instapaper, Calibre and Command Line

After I read Scott Hanselman’s post ”Instapaper delivered to your Kindle changes how you consume web content - Plus IFTTT, blogs and more”, I remembered that I had wanted to create an automated Instapaper-to-e-book-reader “content delivery system”. Now that I have finished, here is my little story.

LBook V5

Almost a year ago, when I started using Instapaper, I realized that it would be great to grab all the articles collected through the week, convert them to EPUB format and send the electronic book to my e-book reader device. The only problem was my device - an Lbook V5. Yes, it is totally outdated and old compared to Kindle devices. It supports EPUB but does not have access to the Internet, so the Instapaper “download” feature doesn’t work for me.

A few months ago I found Calibre - a free and open source e-book library management application. It helped me to organize and manage my entire electronic library and I’m totally happy with it. Calibre has everything that could possibly be needed - scheduler support, custom news sources with interactive setup and converters to various e-book formats. But what is most interesting and important, Calibre has a command line ebook-convert.exe utility which can be driven by recipe files. Recipes in Calibre are just Python scripts (with a bit of custom logic if it is needed to parse some specific news source).

Below is a simple Calibre recipe:

class AdvancedUserRecipe1352822143(BasicNewsRecipe):
  title          = u'Custom News Source'
  oldest_article = 7
  max_articles_per_feed = 100
  auto_cleanup = True

  feeds = [(u'The title of the feed', u'http://somesite.com/feed')]

This defines an RSS feed source at http://somesite.com/feed and declares that there should be no more than 100 articles not older than 7 days. If we use it with the ebook-convert utility, it will automatically fetch news from the specified feed and generate an e-book file. The command line to generate a book is as follows:

ebook-convert.exe input_file output_file [options]

When the input_file parameter is a recipe, ebook-convert runs it and then produces an e-book in the format specified by the output_file parameter. The recipe should populate the feeds list so ebook-convert knows which XML feeds should be processed. Options can accept two parameters - username and password (correct me if I’m wrong, but I didn’t find any information about the possibility of using other/custom parameters). That was a brief introduction to Calibre recipe files. Now here is the problem.

Calibre has a built-in Instapaper recipe. This recipe was created by Stanislav Khromov with Jim Ramsay. The recipe has two versions - a stable one (part of the current Calibre release) and a development version; both can be found on BitBucket.

The development version of the Instapaper recipe does almost what I want, but I needed to extend its functionality, including:

  • Grab articles from all pages inside one directory (yes, sometimes it happens when I’m not reading Instapaper articles for a few weeks).
  • Merge articles from certain directories into one book.
  • Archive all items in the directories. This is actually implemented in the development version, but instead of using the “Archive All…” form, the recipe emulates clicking the “Move to Archive” button, which takes a lot of time to process all items.

At first I decided to extend the development version of the recipe mentioned above, but after I had wasted an hour trying to beat Python I realized that I could write a command line utility in .NET (where I feel very comfortable) which would do whatever I want, and I would save a ton of time (I’m definitely not going to learn Python just to change/fix one Calibre recipe :)). So here is InstaFeed - a little command line utility that can enumerate the names of Instapaper directories, generate a single RSS feed for a specified list of directories and archive them all at once. It uses two awesome open-source projects - Html Agility Pack and Command Line Parser Library.

Note: While this utility parses Instapaper HTML and produces RSS, you can probably bypass the “RSS limits” of Instapaper non-subscription accounts. But I encourage you to support this service. Cheating is not good at all; please respect Marco Arment’s work and the effort he has put into this awesome service.

Having a command line utility that produces locally stored RSS feeds, the only thing that remains is to create a simple Calibre recipe for the ebook-convert utility. The recipe should be parameterized with the path to the RSS feed generated by InstaFeed. Here is the code:

class LocalRssFeed(BasicNewsRecipe):
  title        = u'local_rss_feed'
  oldest_article    = 365
  max_articles_per_feed    = 100
  auto_cleanup    = True
  feeds = None

  def get_feeds(self):
      # little hack that allows passing path to local RSS feed as a parameter via command line
      self.feeds = [u'Instapaper Unread', 'file:///' + self.username]
      return self.feeds

All custom recipes should be stored within the Calibre Settings\custom_recipes folder.

Note: Everything in this post applies to Portable 0.8.65.0 version of Calibre for Microsoft Windows. I have no idea whether it will work for other versions or installation variants.

Below is the source of the batch file that produces an RSS feed from the Read Later Instapaper directory and then generates an e-book in EPUB format at C:\Temp. I run this batch file weekly via Windows Task Scheduler.

@echo off
setlocal EnableDelayedExpansion
setlocal EnableExtensions

:: change path to your calibre and instafeed executables
set _instafeeddir=F:\util\instafeed\
set _calibredir=F:\util\Calibre Portable\

:: set output directory and naming convention here
set filename=C:\Temp\[%date:/=%]_instapaper_unread_articles
set rssfile=%filename%.xml
set ebookfile=%filename%.epub

%_instafeeddir%instafeed.exe -c rss -u <instapaper_username> -p <instapaper password> -d "Read Later" -o "%rssfile%"
%_calibredir%\Calibre\ebook-convert.exe "%_calibredir%\Calibre Settings\custom_recipes\local_rss_feed.recipe" "%ebookfile%" --username="%rssfile%"

endlocal

I had fun writing InstaFeed and digging into Calibre recipes, and I hope that someone will benefit from my experience. What else can be said? Read with convenience and have fun!

Adding Client-Side Validation Support for PhoneAttribute or Fighting the Lookbehind in JavaScript

Today I was working on the JavaScript implementation of the validation routine for PhoneAttribute in the context of my hobby project DAValidation. Examining the sources of .NET 4.5 showed that the validation is done via a regular expression:

Unsupported lookbehind part of phone validation regexp pattern

And here is the problem - the pattern uses the lookbehind feature, which is not supported in JavaScript. Quote from regular-expressions.info:

Finally, flavors like JavaScript, Ruby and Tcl do not support lookbehind at all, even though they do support lookahead.

This lookbehind is used to match the “+” sign at the beginning of the string, i.e. to check for the existence of the prefix. To make this work in JavaScript, the pattern should be reversed and the lookbehind assertion replaced with a lookahead (turning the prefix check into a suffix check). And that’s it! The resulting pattern is:

^(\d+\s?(x|\.txe?)\s?)?((\)(\d+[\s\-\.]?)?\d+\(|\d+)[\s\-\.]?)*(\)([\s\-\.]?\d+)?\d+\+?\((?!\+.*)|\d+)(\s?\+)?$

As a proof, here is a test HTML page:

<html>
    <head>
        <title>Phone Number RegExp Test Page</title>
    </head>
    <body>
        <script>
            function validateInput() {
                var phoneRegex = new RegExp("^(\\d+\\s?(x|\\.txe?)\\s?)?((\\)(\\d+[\\s\\-\\.]?)?\\d+\\(|\\d+)[\\s\\-\\.]?)*(\\)([\\s\\-\\.]?\\d+)?\\d+\\+?\\((?!\\+.*)|\\d+)(\\s?\\+)?$", "i");

                var input = document.getElementById("tbPhone");
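                // reverse the input string because the pattern itself was reversed (lookbehind replaced with lookahead)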
                var value = input.value.split("").reverse().join("");
                alert(phoneRegex.test(value));
            }
        </script>

        <input type="text" id="tbPhone" />
        <button onclick="validateInput()">Validate</button>
    </body>
</html>

While working on reversing the pattern I was using my favorite regular expression building and testing tool, Expresso. Also, a great article by Steven Levithan, Mimicking Lookbehind in JavaScript, helped me look deeper and actually find the right solution to the problem.

PS. Now that I have finally finished adding support for the new .NET 4.5 validation attributes, the new version of DAValidation will be published soon. Stay tuned ;)

How to Implement Configurable Dynamic Data Filters in ASP.NET 4.5

Whenever we speak about data driven web applications, there is the task of providing a data filtering feature, or configurable filters with the ability to save search criteria individually for each user. The most convenient filtering experiences I have ever encountered were in bug tracking systems. Fast and simple. To get an idea of what I’m talking about, just look at the Redmine Issues page. Can we implement something similar with pure ASP.NET, particularly with ASP.NET Dynamic Data? Why Dynamic Data? Because of its focus on metadata, which is set by attributes from the DataAnnotations namespace, and its convention over configuration approach to building data driven applications. It is simple and convenient, and it does not take much effort to extend it.

For filtering, Dynamic Data offers us Filter Templates with the FilterRepeater control. To get an idea of how Dynamic Data Filter Templates work, I highly recommend reading a great post by Oleg Sych, “Understanding ASP.NET Dynamic Data: Filter Templates”.

Until .NET 4.5 there were no extension points where we could retake control over filter template creation. And surprisingly, I found that the IFilterExpressionProvider interface became public in .NET 4.5. So now we can extend the Dynamic Data filtering mechanism.

ASP.NET Dynamic Data QueryableFilterRepeater

For a jump start, let’s recall what the List PageTemplate in Dynamic Data looks like:

<asp:QueryableFilterRepeater runat="server" ID="FilterRepeater">
  <ItemTemplate>
      <asp:Label runat="server" Text='<%# Eval("DisplayName") %>' OnPreRender="Label_PreRender" />
      <asp:DynamicFilter runat="server" ID="DynamicFilter" OnFilterChanged="DynamicFilter_FilterChanged" /><br />
  </ItemTemplate>
</asp:QueryableFilterRepeater>

<asp:GridView ID="GridView1" runat="server" DataSourceID="GridDataSource" >
<%-- Contents and styling omitted for brevity --%>
</asp:GridView>

<asp:EntityDataSource ID="GridDataSource" runat="server" EnableDelete="true" />

<asp:QueryExtender TargetControlID="GridDataSource" ID="GridQueryExtender" runat="server">
  <asp:DynamicFilterExpression ControlID="FilterRepeater" />
</asp:QueryExtender>

The purpose of QueryableFilterRepeater is to generate a set of filters for a set of columns. It should contain a DynamicFilter control, which is the actual placeholder for a FilterTemplate control. QueryableFilterRepeater implements the IFilterExpressionProvider interface, which is consumed by the QueryExtender via the DynamicFilterExpression control.

public interface IFilterExpressionProvider
{
  IQueryable GetQueryable(IQueryable source);
  void Initialize(IQueryableDataSource dataSource);
}

The complete call sequence is shown in the diagram below.

Sequence diagram showing QueryExtender interaction with Dynamic Data controls

Building Configurable Alternative to QueryableFilterRepeater

Since QueryableFilterRepeater creates filters automatically, the only thing we could do with it is hide DynamicFilter controls on the client or server side. To my mind that is not a good idea, so a custom implementation of IFilterExpressionProvider is needed. It should support the same item template model as QueryableFilterRepeater, but with the ability to add or remove filter controls between postbacks.

[ParseChildren(true)]
[PersistChildren(false)]
public class DynamicFilterRepeater : Control, IFilterExpressionProvider
{
  private readonly List<IFilterExpressionProvider> filters = new List<IFilterExpressionProvider>();
  private IQueryableDataSource dataSource;

  IQueryable IFilterExpressionProvider.GetQueryable(IQueryable source)
  {
      return filters.Aggregate(source, (current, filter) => filter.GetQueryable(current));
  }

  void IFilterExpressionProvider.Initialize(IQueryableDataSource queryableDataSource)
  {
      Contract.Assert(queryableDataSource != null);
      Contract.Assert(queryableDataSource is IDynamicDataSource);

      if (ItemTemplate == null)
          return;
      dataSource = queryableDataSource;

      Page.InitComplete += InitComplete;
      Page.LoadComplete += LoadCompleted;
  }
}

The only disappointing thing is that the content generation of DynamicFilter happens on the Page.InitComplete event.

Oleg Sych tried to change this situation, but his suggestion is now closed, and it seems nothing will change. I have reposted his suggestion on visualstudio.uservoice.com in the hope that this time we will succeed.

To make things work, the DynamicFilter control should initialize itself via the EnsureInit method, which is, generally speaking, responsible for FilterTemplate lookup and loading. In other words, to force the DynamicFilter to generate its content, this method should be called. The only way to do that is through reflection, since EnsureInit is private.

private static readonly MethodInfo DynamicFilterEnsureInit;

static DynamicFilterRepeater()
{
  DynamicFilterEnsureInit = typeof (DynamicFilter).GetMethod("EnsureInit", BindingFlags.NonPublic | BindingFlags.Instance);
}

private void AddFilterControls(IEnumerable<string> columnNames)
{
  foreach (MetaColumn column in GetFilteredMetaColumns(columnNames))
  {
      DynamicFilterRepeaterItem item = new DynamicFilterRepeaterItem { DataItemIndex = itemIndex, DisplayIndex = itemIndex };
      itemIndex++;
      ItemTemplate.InstantiateIn(item);
      Controls.Add(item);

      DynamicFilter filter = item.FindControl(DynamicFilterContainerId) as DynamicFilter;
      if (filter == null)
      {
          throw new InvalidOperationException(String.Format(CultureInfo.CurrentCulture,
              "FilterRepeater '{0}' does not contain a control of type '{1}' with ID '{2}' in its item templates",
              ID,
              typeof(QueryableFilterUserControl).FullName,
              DynamicFilterContainerId));
      }
      filter.DataField = column.Name;

      item.DataItem = column;
      item.DataBind();
      item.DataItem = null;

      filters.Add(filter);
  }

  filters.ForEach(f => DynamicFilterEnsureInit.Invoke(f, new object[] { dataSource }));
}

private IEnumerable<MetaColumn> GetFilteredMetaColumns(IEnumerable<string> filterColumns)
{
  return MetaTable.GetFilteredColumns()
      .Where(column => filterColumns.Contains(column.Name))
      .OrderBy(column => column.Name);
}

private class DynamicFilterRepeaterItem : Control, IDataItemContainer
{
  public object DataItem { get; internal set; }
  public int DataItemIndex { get; internal set; }
  public int DisplayIndex { get; internal set; }
}

Another problem to solve is filter control instantiation. As pointed out before, everything in Dynamic Data connected with filtering is initialized at the Page.InitComplete event, and if you want your dynamic filters to work, they should be instantiated at or before InitComplete. So far I see only one way to solve this: the AddFilterControls method should be called twice - the first time for the filter controls that were already present on the page (InitComplete event), and the second time for newly added columns that are to be filtered (LoadComplete event).

private void InitComplete(object sender, EventArgs e)
{
  if (initCompleted)
      return;

  addedOnInitCompleteFilters.AddRange(FilterColumns);
  AddFilterControls(addedOnInitCompleteFilters);

  initCompleted = true;
}

private void LoadCompleted(object sender, EventArgs eventArgs)
{
  if (loadCompleted)
      return;

  AddFilterControls(FilterColumns.Except(addedOnInitCompleteFilters));

  loadCompleted = true;
}
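
For readability, the snippets above omit several DynamicFilterRepeater members that they rely on: the item template, the list of filter columns, the item index counter and the completion flags. The declarations below are a minimal sketch of how these members could look; the names follow the snippets, but the exact code in the repository may differ.

// Assumed member declarations for DynamicFilterRepeater (sketch only).
// Requires the System.Collections.Generic, System.ComponentModel,
// System.Web.UI and System.Web.DynamicData namespaces.

// ID of the DynamicFilter control expected inside the item template.
private const string DynamicFilterContainerId = "DynamicFilter";

// Columns to build filters for; populated by the hosting user control.
private readonly List<string> filterColumns = new List<string>();
public List<string> FilterColumns { get { return filterColumns; } }

// Template instantiated once per filtered column.
[Browsable(false)]
[PersistenceMode(PersistenceMode.InnerProperty)]
public ITemplate ItemTemplate { get; set; }

// MetaTable describing the filtered entity; assumed to be supplied by the
// hosting control or resolved from the data source.
public MetaTable MetaTable { get; set; }

// Running index for the generated repeater items.
private int itemIndex;

// Guard flags and the list of columns already processed on InitComplete.
private bool initCompleted;
private bool loadCompleted;
private readonly List<string> addedOnInitCompleteFilters = new List<string>();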

Encapsulating DynamicFilterRepeater

DynamicFilterRepeater is only a part of a more general component, though. All it does is render filter controls and provide a filter expression. To start working, DynamicFilterRepeater needs two things: an IQueryableDataSource and a list of columns to be filtered. Since filtering across the website should be consistent and unified, it is a good idea to encapsulate DynamicFilterRepeater in a UserControl that serves as the HTML layout and the glue between the page (with its IQueryableDataSource, QueryExtender and data-bound control) and DynamicFilterRepeater. In my example I chose a GridView as the data-bound control.

<asp:Label runat="server" Text="Add filter" AssociatedControlID="ddlFilterableColumns" />
<asp:DropDownList runat="server" ID="ddlFilterableColumns" CssClass="ui-widget"
  AutoPostBack="True"
  ItemType="<%$ Code: typeof(KeyValuePair<string, string>) %>"
  DataValueField="Key"
  DataTextField="Value"
  SelectMethod="GetFilterableColumns"
  OnSelectedIndexChanged="ddlFilterableColumns_SelectedIndexChanged">
</asp:DropDownList>

<input type="hidden" runat="server" ID="FilterColumns" />
<dd:DynamicFilterRepeater runat="server" ID="FilterRepeater">
  <ItemTemplate>
      <div>
          <asp:Label ID="lblDisplayName" runat="server"
              Text='<%# Eval("DisplayName") %>'
              OnPreRender="lblDisplayName_PreRender" />
          <asp:DynamicFilter runat="server" ID="DynamicFilter" />
      </div>
  </ItemTemplate>
</dd:DynamicFilterRepeater>

Remember I mentioned the two-stage filter control instantiation and a storage for the list of filtered columns? This user control is the place where that list can be stored. To get the list of filtered columns before the Page.InitComplete event, I use a little trick: a hidden input field serves as the storage for the filtered columns list. Making the hidden input a server control, so that its ID is generated on the server, makes it possible to retrieve its value directly from the Request.Form collection at any stage of the page lifecycle.

public partial class DynamicFilterForm : UserControl
{
  public DynamicFilterRepeater FilterRepeater;
  public Type FilterType { get; set; }

  [IDReferenceProperty(typeof(GridView))]
  public string GridViewID { get; set; }

  [IDReferenceProperty(typeof(QueryExtender))]
  public string QueryExtenderID { get; set; }

  private MetaTable MetaTable { get; set; }
  private GridView GridView { get; set; }
  protected QueryExtender GridQueryExtender { get; set; }

  protected override void OnInit(EventArgs e)
  {
      base.OnInit(e);
      MetaTable = MetaTable.CreateTable(FilterType);

      GridQueryExtender = this.FindChildControl<QueryExtender>(QueryExtenderID);
      GridView = this.FindChildControl<GridView>(GridViewID);
      GridView.SetMetaTable(MetaTable);

      // Tricky thing to retrieve list of filter columns directly from hidden field
      if (!string.IsNullOrEmpty(Request.Form[FilterColumns.UniqueID]))
          FilterRepeater.FilterColumns.AddRange(Request.Form[FilterColumns.UniqueID].Split(','));

      ((IFilterExpressionProvider)FilterRepeater).Initialize(GridQueryExtender.DataSource);
  }

  protected override void OnPreRender(EventArgs e)
  {
      FilterColumns.Value = string.Join(",", FilterRepeater.FilterColumns);
      base.OnPreRender(e);
  }
  // event handlers omitted
}
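
The code-behind above also relies on a FindChildControl<T> helper and omits the drop-down's select method and SelectedIndexChanged handler. Below is a minimal sketch of what these pieces could look like; the lookup strategy and the handler bodies are assumptions for illustration, and the actual implementation is in the sample project.

// Sketch of a FindChildControl<T> extension: resolve a control by ID, first in
// the naming container of the user control, then by walking the page's control tree.
public static class ControlExtensions
{
  public static T FindChildControl<T>(this Control control, string id) where T : Control
  {
      Control direct = control.NamingContainer != null
          ? control.NamingContainer.FindControl(id)
          : null;
      return (direct as T) ?? FindRecursive<T>(control.Page, id);
  }

  private static T FindRecursive<T>(Control root, string id) where T : Control
  {
      if (root == null)
          return null;

      T typed = root as T;
      if (typed != null && string.Equals(root.ID, id, StringComparison.OrdinalIgnoreCase))
          return typed;

      foreach (Control child in root.Controls)
      {
          T found = FindRecursive<T>(child, id);
          if (found != null)
              return found;
      }
      return null;
  }
}

// Sketch of the omitted DynamicFilterForm members (requires System.Linq): the select
// method feeding ddlFilterableColumns and the handler that registers a chosen column.
public IEnumerable<KeyValuePair<string, string>> GetFilterableColumns()
{
  // Offer only filterable columns that do not have a filter control yet.
  return MetaTable.GetFilteredColumns()
      .Where(column => !FilterRepeater.FilterColumns.Contains(column.Name))
      .Select(column => new KeyValuePair<string, string>(column.Name, column.DisplayName));
}

protected void ddlFilterableColumns_SelectedIndexChanged(object sender, EventArgs e)
{
  // The new filter control itself is instantiated by DynamicFilterRepeater on LoadComplete.
  DropDownList list = (DropDownList)sender;
  if (!string.IsNullOrEmpty(list.SelectedValue))
      FilterRepeater.FilterColumns.Add(list.SelectedValue);
}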

Conclusions

While this solution works, I am a bit concerned about it. The existing infrastructure was in my way the whole time I experimented with IFilterExpressionProvider, and I had to look deep inside the mechanisms of Dynamic Data to understand them and find ways around their restrictions. This leads me to a single conclusion: Dynamic Data was not designed to provide configurable filtering. So my answer to the question of whether a configurable filtering experience can be implemented with Dynamic Data is yes, but be careful what you wish for, since the framework was not designed for this kind of scenario.

I did not cover how to save filters here, but it is pretty simple: all we need is to store an associative array of “column-value” pairs for a specific page somewhere, as sketched below. The complete source code is available on GitHub, and you will need Visual Studio 11 Beta with LocalDB set up to run the sample project.
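
To illustrate the idea, such storage can be as simple as a dictionary of column/value pairs keyed by user and page. The class below is an illustrative sketch and is not part of the sample project - session state, a profile provider or a database table would work equally well.

// Illustrative sketch only: persist "column -> value" pairs per user and page.
// Requires the System.Collections.Generic namespace.
public class FilterStateStore
{
  private readonly Dictionary<string, Dictionary<string, string>> store =
      new Dictionary<string, Dictionary<string, string>>();

  public void Save(string userName, string pagePath, IDictionary<string, string> filters)
  {
      // Copy the incoming values so later changes do not affect the saved state.
      store[userName + "|" + pagePath] = new Dictionary<string, string>(filters);
  }

  public IDictionary<string, string> Load(string userName, string pagePath)
  {
      Dictionary<string, string> filters;
      return store.TryGetValue(userName + "|" + pagePath, out filters)
          ? filters
          : new Dictionary<string, string>();
  }
}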

I would gladly accept criticism, ideas or just thoughts on this particular scenario. Share, keep coding and have fun!