Friday 23 August 2013

Odd use of Explicit Interface Implementation

I was trying to write a myConcurrentDictionary.Remove line today, but Visual Studio prevented me from doing so, saying that ConcurrentDictionary<K,V> lacks that method. Well, ConcurrentDictionary implements IDictionary, so it has to feature a Remove method! Looking into MSDN we can see that Remove comes as an Explicit Interface Implementation, so in order to use it you'll need a cast: ((IDictionary)myConcurrentDictionary).Remove(...);
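As a quick sketch (with a hypothetical string-to-int dictionary), this is both the cast at work and the public TryRemove method that the class encourages instead:

```csharp
using System;
using System.Collections.Generic;
using System.Collections.Concurrent;

class Program
{
    static void Main()
    {
        var dict = new ConcurrentDictionary<string, int>();
        dict["one"] = 1;

        // dict.Remove("one"); // does not compile: Remove is an explicit interface implementation

        // casting to the interface reaches the "hidden" method
        bool removed = ((IDictionary<string, int>)dict).Remove("one");
        Console.WriteLine("removed via cast: " + removed); // True

        // the public alternative the class actually encourages
        dict["two"] = 2;
        int value;
        Console.WriteLine("removed via TryRemove: " + dict.TryRemove("two", out value)); // True
    }
}
```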

OK, my understanding of Explicit Interface Implementations is that you use them when your class implements several interfaces with colliding methods and you want to give a different implementation of such a method for each interface. ConcurrentDictionary implements 4 different interfaces (IDictionary<TKey, TValue>, ICollection<KeyValuePair<TKey, TValue>>, IDictionary, ICollection) that sport a Remove method. The signatures are slightly different and somewhat colliding (TKey, Object...), so this explicit implementation makes things clear. But why didn't they add a non-explicit Remove(TKey) method, which would be the most commonly used one? It seems as if they were preventing the use of Remove by sort of hiding it.

Well, some searching confirms that impression. Here we can read:

Explicit interface implementations can be used to disambiguate class and interface methods that would otherwise conflict. Explicit interfaces can also be used to hide the details of an interface that the class developer considers private.

And then we find this and this, two excellent discussions on StackOverflow, with answers from Jon Skeet:

It allows you to implement part of an interface in a "discouraging" way - for example, ReadOnlyCollection implements IList, but "discourages" the mutating calls using explicit interface implementation. This will discourage callers who know about an object by its concrete type from calling inappropriate methods. This smells somewhat of interfaces being too broad, or inappropriately implemented - why would you implement an interface if you couldn't fulfil all its contracts? - but in a pragmatic sense, it can be useful.

and Eric Lippert:

"Discouragement" also allows you to effectively "rename" the interface methods. For example, you might have class C : IDisposable { void IDisposable.Dispose() { this.Close(); } public void Close() { ... } } -- that way you get a public Close method, you don't see the potentially confusing Dispose method, and yet you can still use the object in a context where IDisposable is expected, like a "using" statement.

The "renaming" thing for Dispose/Close seems a bit unnecessary to me, and as for the "hide to discourage" argument, I tend to see its need as denoting a wrongly designed interface (an interface that can't properly fulfill its contract).
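For reference, Eric Lippert's renaming idea fleshed out into a compilable sketch (Connection is just a made-up class name for illustration):

```csharp
using System;

class Connection : IDisposable
{
    // the public, better-named method
    public void Close()
    {
        Console.WriteLine("connection closed");
    }

    // hidden unless the object is seen through IDisposable
    void IDisposable.Dispose()
    {
        this.Close();
    }
}

class Program
{
    static void Main()
    {
        var c = new Connection();
        // c.Dispose(); // does not compile: Dispose is only reachable via the interface
        using (c) // the using statement sees IDisposable and calls Dispose, which forwards to Close
        {
        }
    }
}
```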

Thursday 22 August 2013

Delegates Caching

I'm not quite sure how, but some days ago I came across this interesting question on StackOverflow, answered by no less than 2 of the C# gods: Eric Lippert and Jon Skeet.

The method that backs the delegate for a given lambda is always the same. The method that backs the delegate for "the same" lambda that appears lexically twice is permitted to be the same, but in practice is not the same in our implementation. The delegate instance that is created for a given lambda might or might not always be the same, depending on how smart the compiler is about caching it.

A lambda expression which doesn't capture any variables is cached statically. A lambda expression which only captures "this" could be captured on a per-instance basis, but isn't. A lambda expression which captures a local variable can't be cached.

So the C# compiler is smart enough to cache delegate instances when possible to avoid creating the same instance over and over. This comes to me as a really interesting revelation, as on occasion I've felt slightly uncomfortable when writing code involving many lambdas, as it seemed to me like an "object explosion".

I've done some tests to verify the above claims.

The delegate returned below is not capturing anything (it's not a closure), so we can see caching at work!

public static Func<string, string> CreateFormatter()
{
 return st => st.ToUpper();
}
...
var func1 = CreateFormatter();
var func2 = CreateFormatter();
Console.WriteLine("simple delegate being cached? " + Object.ReferenceEquals(func1, func2)); //true

If we take a look at the generated IL, we can see the cryptically named field "CS$<>9__CachedAnonymousMethodDelegate1" used to cache the delegate.

On the contrary, if the code returns a closure, it should be obvious that caching can't take place, as we need different instances, each one with access to the corresponding captured values (the trapped values are fields in an instance of a support class that the compiler creates under the covers, and that is pointed to by the delegate's Target property).

public static Func<string, string> CreateFormatterClosure(string s)
{
 return st => s + st.ToUpper() + s;
}
...
func1 = CreateFormatterClosure("x");
func2 = CreateFormatterClosure("x");
Console.WriteLine("closure being cached? " + Object.ReferenceEquals(func1, func2)); //false
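There's a third case in the claims quoted above: a lambda that captures only "this" could in theory be cached per instance, but isn't. A sketch of that test (Greeter is a hypothetical class) shows no caching either:

```csharp
using System;

class Greeter
{
    private string name = "world";

    public Func<string> CreateGreeting()
    {
        // captures only "this": it could be cached per instance,
        // but in practice the compiler creates a new delegate each call
        return () => "hello " + this.name;
    }
}

class Program
{
    static void Main()
    {
        var g = new Greeter();
        var f1 = g.CreateGreeting();
        var f2 = g.CreateGreeting();
        Console.WriteLine("this-capturing lambda cached? " + Object.ReferenceEquals(f1, f2)); // false
    }
}
```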

Notice that I'm using Object.ReferenceEquals rather than == to check for object identity, because the == operator for delegates is overloaded to do a value comparison. From MSDN:

Two delegates of the same type with the same targets, methods, and invocation lists are considered equal.

If we try similar code in JavaScript, we'll see that there isn't any hidden compiler trick and no function caching is done; each time you create a new function, a new function object is created:

function createFunction(){
 return function(){};
}
console.log(createFunction() == createFunction());//false

(function(){}) == (function(){}); //false

To avoid this, I remember having seen in some library code something like var emptyFunc = function(){}; in order to reuse that unique function wherever a "do nothing" function was needed.

Summing up, the C# compiler does a really great job again (as it does with closures, iterators (yield), dynamic, async...). It's no wonder it's taking longer than expected for the Roslyn guys to rewrite the native compiler in C#.

Tuesday 13 August 2013

Modify While Iterating II

Last month I wrote about the risks of modifying a collection while iterating it. Today I've come across a couple of things that complement that entry, so I'll post them here.

Recent versions of .Net Framework brought along 2 very important additions in the land of Collections: Read-Only Collections and Concurrent Collections. Pretty fundamental stuff, but admittedly I hadn't made any use of them until very recently. I had some very wrong assumptions as to how these collections behave regarding the modify while iterating thing, so let's take a look:

Read-Only Collections

I guess due to some incomplete, simultaneous reading of several different pieces of information, I had the impression that when you create a Read-Only Collection from a normal collection you were taking a snapshot of that collection, which as such would be independent from the original. Nothing could be further from the truth. As clearly stated in the documentation:

A collection that is read-only is simply a collection with a wrapper that prevents modifying the collection; therefore, if changes are made to the underlying collection, the read-only collection reflects those changes. See Collection for a modifiable version of this class.

I think it's quite important to have this pretty clear, as a common scenario is: your class exposes a Read-Only view of one of its internal collections, and while some consumer threads are iterating over that view, your class modifies the underlying collection. You'll get the classical InvalidOperationException then. I've written some code to confirm it. You can also just disassemble the ReadOnlyCollection.GetEnumerator method and you'll find this:
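A minimal repro of that scenario could look like this; mutating the wrapped List while iterating the read-only view blows up:

```csharp
using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;

class Program
{
    static void Main()
    {
        var inner = new List<int> { 1, 2, 3 };
        ReadOnlyCollection<int> view = inner.AsReadOnly(); // a wrapper, not a snapshot

        try
        {
            foreach (int item in view)
            {
                inner.Add(4); // mutate the underlying list mid-iteration
            }
        }
        catch (InvalidOperationException)
        {
            Console.WriteLine("the read-only view noticed the underlying change");
        }
    }
}
```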

public IEnumerator<T> GetEnumerator()
{
 return this.list.GetEnumerator();
}

So the normal enumerator of the internal collection is being used, and this enumerator will do the "have you been modified?" check based on the _version field of the internal collection...

Concurrent Collections

Well, for Concurrent Collections it's easy to deduce that if they allow adding/removing/updating in parallel, iterating at the same time should not be a problem. Somehow I thought I had read something different somewhere, so I did a quick test to verify that you can continue to iterate a collection that has been modified and no InvalidOperationException will happen.
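A quick sketch of such a test (the Framework docs describe ConcurrentQueue enumeration as a moment-in-time snapshot, so this loop also terminates):

```csharp
using System;
using System.Collections.Concurrent;

class Program
{
    static void Main()
    {
        var queue = new ConcurrentQueue<int>();
        queue.Enqueue(1);
        queue.Enqueue(2);

        // modifying while iterating: no InvalidOperationException here
        foreach (int item in queue)
        {
            queue.Enqueue(item + 10);
        }

        Console.WriteLine("no exception, final count: " + queue.Count);
    }
}
```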

You could also verify it by peeking, for instance, into the implementation of ConcurrentQueue and seeing that it lacks any _version field.

Tuesday 6 August 2013

Maniac

I don't feel much like writing a post now, but after watching this masterpiece I feel compelled to share it with anyone reading this blog. Maniac is one of the best horror films I've watched in a long while. It's extreme, extreme, extreme, utterly extreme... Though set in Los Angeles and starring North American actors, you could somehow associate it with the New French Extremity school (indeed, the director is French).

The story is nothing new: a disturbed young man with a repressed sexuality (due to a childhood trauma owing to his mother's unrepressed sexuality) turns into a serial killer. Sure, you can think of several films revolving around the same idea, but this one is spiced up by some brilliant elements, like being entirely shot from the point of view of the murderer, the mannequins that give it an arty, "modern horror" feel, and especially the sheer brutality of some of its moments.

Really, this is an absolute must-see for anyone into horror films, but be warned that many people could find it too hard. Indeed, I for one would say there's more blood on screen than necessary, and the last sequence of the film was quite unnecessary (a gore feast that adds no value at all). But well, perfection is the biggest of horrors... By the way, this is a remake of a film from the 80's, so I should probably give that one a try too.

Saturday 3 August 2013

Multiple Inheritance in C#

From the many features being added to Java 8, there's one that has really caught my eye: Default Interface Implementation, aka Defender Methods (the other ones are really necessary stuff, but nothing out of the ordinary, as they should have been in the language many years ago).

The main motivation for these default methods is the same as for Extension Methods in C#: allowing you to add methods to an interface without breaking existing code. Let's think of C# in pre-Linq times. You had an IEnumerable interface and felt the need to add to it methods like Contains, All, Any, Skip, Take... Well, if you just added those methods, all your existing classes implementing IEnumerable would need to be updated to add an implementation of those methods... quite a hard job. The solution to this in C# was Extension Methods. In Java they were about to mimic this same approach (and indeed you'll still find old references to "Java Extension Methods"), but in the end they opted for a much more powerful one: Default Methods.

public interface SimpleInterface {
 public void doSomeWork();
 
 //A default method in the interface, created using the 'default' keyword
 default public void doSomeOtherWork(){
  System.out.println("DoSomeOtherWork implementation in the interface");
 }
}

You've always been able to implement multiple interfaces; now they're adding behaviour to interfaces, so this ends up with you being able to inherit behaviour from multiple "places"!!!

Extension Methods in C# also add behaviour to interfaces, and as such you also get a sort of multiple inheritance, but in a quite more limited, "second class" way. Extension Methods are just a compiler artifact, and method resolution is done at compile time, so you lose the runtime magic of polymorphism, overriding and vTables. When you extend an existing interface with new methods, if a derived class then implements one of those extra methods, polymorphism won't work to invoke that overridden method. Let's see an example:


public interface IPerson
{
 string Name{get;set;}
 string SayHello();
}

public static class IPersonExtensions
{
 public static string SayBye(this IPerson person)
 {
  return person.Name + " says Bye from Extension Method";
 }
}

public class Person:IPerson
{
 public string Name {get;set;}
 public Person(string name)
 {
  this.Name = name;
 }
 
 public string SayHello()
 {
  return this.Name + " says Hello";
 }
 public string SayBye()
 {
  return this.Name + " says Bye";
 }
}



public class Program
{
 public static void Main()
 {
  //the extension method is good to add a SayBye to the IPerson interface
  //but as a compile time artifact, it will not take into account if the implementing class has "overriden" it
  IPerson p1 = new Person("Iyan");
  Console.WriteLine(p1.SayBye()); //writes "says Bye from Extension Method"
  
  Person p2 = p1 as Person;
  Console.WriteLine(p2.SayBye()); //writes "says Bye"
 }
}

In the example above, the IPerson interface has been extended with an additional method, SayBye, through the IPersonExtensions static class. Then the Person class tries to override SayBye with its own implementation, but polymorphism won't work when it's invoked on a Person object via an IPerson reference, and the implementation in the extension method will be used rather than the one in Person.

Another limitation of Extension Methods is that they are not visible via Reflection; I mean, if you call Type.GetMethods() it won't include in the returned list those methods that can be accessed on that type via Extension Methods. As a consequence, they don't play well with dynamic either, when you expect that dynamic resolution to be done through Reflection. You'll find more information on this here and here.
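A small sketch of both limitations (IGreeter and GreeterExtensions are made-up names for illustration):

```csharp
using System;
using System.Linq;

public interface IGreeter { string Name { get; } }

public static class GreeterExtensions
{
    public static string SayBye(this IGreeter g) { return g.Name + " says Bye"; }
}

public class Greeter : IGreeter
{
    public string Name { get { return "Iyan"; } }
}

public class Program
{
    public static void Main()
    {
        // Reflection doesn't list extension methods among a type's methods...
        bool found = typeof(Greeter).GetMethods().Any(m => m.Name == "SayBye");
        Console.WriteLine("SayBye visible via Reflection? " + found); // False

        // ...and dynamic dispatch can't find them either
        dynamic d = new Greeter();
        try
        {
            d.SayBye();
        }
        catch (Microsoft.CSharp.RuntimeBinder.RuntimeBinderException)
        {
            Console.WriteLine("dynamic couldn't bind to the extension method");
        }
    }
}
```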

With all this in mind, I decided to simulate this "Multiple Inheritance of Behaviour" in C#. The idea is simple and effective, though not very wrist-friendly. For each interface to which you'd like to add behaviour, you create a class that implements the interface and contains those "default methods". Then, for your normal classes implementing that interface, you add a reference to that class for the default implementation, and for those methods for which you don't want to override the default implementation, you just delegate calls to that reference.

public interface IValidable
{
 bool AmIValid();
}

public interface IPersistable
{
 string Persist();
 
 int EstimateTimeForFullPersist();
}

public class DefaultValidable: IValidable
{
 //just one single method, no calls to other methods in the class, so no need for an Implementer field
 public bool AmIValid()
 {
  return this.GetType().GetProperties(BindingFlags.Public|BindingFlags.Instance).All(prop => prop.GetValue(this) != null);
 }
}

public class DefaultPersistable: IPersistable
{
 public IPersistable Implementer { get; set; }
 public DefaultPersistable()
 {
  this.Implementer = this;
 }
 
 public string Persist()
 {
  //notice how we have to use [this.Implementer.EstimateTimeForFullPersist] here to allow method overriding to work,
  //cause using [this.EstimateTimeForFullPersist] would invoke the default (NotImplementedException) one.
  if (this.Implementer.EstimateTimeForFullPersist() > 1500)
   return this.ToString();
  else
  {
   //complex logic here
   return "this is the result of a complex logic";
  }
 }
 
 public int EstimateTimeForFullPersist()
 {
  throw new NotImplementedException();
 }
}

public class Book: IValidable, IPersistable
{
 protected IValidable ValidableImplementation { get; set; }
 protected IPersistable PersistableImplementation { get; set; }
 
 public Book(DefaultValidable validableImp, DefaultPersistable persistableImp)
 {
  this.ValidableImplementation = validableImp;
  this.PersistableImplementation = persistableImp;
 }
 
 public bool AmIValid()
 {
  //delegate to default implementation
  return this.ValidableImplementation.AmIValid();
 }

 public string Persist()
 {
  //delegate to default implementation
  return this.PersistableImplementation.Persist();
 }
 
 public int EstimateTimeForFullPersist()
 {
  //do not delegate to default implementation, "override" it
  return 50;
 }
}

public class Program
{
 public static void Main()
 {
  DefaultPersistable defPersistable = new DefaultPersistable();
  Book b = new Book(new DefaultValidable(), defPersistable);
  defPersistable.Implementer = b;
  
  Console.WriteLine("Is the Book valid: " + b.AmIValid().ToString());
  Console.WriteLine("Book.Persist: " + b.Persist());
 }
}

Looking at the implementation above you'll notice that the code is more straightforward for DefaultValidable than for DefaultPersistable. No default method in DefaultValidable invokes other methods of the interface, while in DefaultPersistable the Persist method invokes EstimateTimeForFullPersist, which means that in order to invoke the correct implementation when EstimateTimeForFullPersist has been overridden, we have to use the Implementer reference for those invocations.

You should also notice that while the above technique allows "Multiple Inheritance of Behaviour", it does not address the real motivation for Default Methods in Java: extending the contract of an existing interface with new methods without breaking existing code. You still need to resort to Extension Methods in C# for that.

All this has reminded me of an interesting post I read months ago about using ES6 proxies as a way to implement multiple inheritance in JavaScript. The idea is pretty interesting, but I see an important flaw: the instanceof operator won't work with the "base classes". Applying instanceof to the proxy object will tell you that it's an instance of neither base1 nor base2. This could be fixed if instanceof were also interceptable by the proxy, but it seems that (at least in the current proposal) it's not.

By the way, as it's somehow related to this article, I'll reference here my write-up about Extension Methods and Mixins/Traits from last year.

Thursday 1 August 2013

Scandinavian Crimes

My reading habits (technical or social stuff aside) are pretty lame, averaging I guess 3 books per year. Moreover, I'm not into the classics or anything of the sort, and in the last years almost all the literature I've read has been Scandinavian crime novels (yeah, pretty trendy), and I have to say I really love this "literary school". I started off with Stieg Larsson (yes, I got totally hooked on the Millennium Trilogy, even though I find some failing points in it) and then went on to deeply enjoy Asa Larsson and Camilla Läckberg.

These two Swedish ladies are really excellent writers. Their plots are pretty good, but what I like most about them (and what I think is the reason I got so trapped by their novels) is the tortured characters and their outstanding emotional depictions. These novels are inhabited by tormented souls: people that apparently live a normal life, but that to a greater or lesser extent live surrounded by (their own) phantoms, haunted by remorse, tortured by old bad decisions that have left a bitter taste in their mouths, a taste of failure and incompleteness. Sometimes the source of all this discomfort is a large tragedy, but for other characters it's a concatenation of small inconveniences that makes up their personal landscape of desolation. A vast, wild (and snow-covered, it could not be otherwise) landscape, overcast skies, deep lonely forests or small communities where the permanent gossip isolates those that consider themselves on the losers' side... that's the other essential part of these novels.

In the last months I've read "The Hypnotist" (pretty good, I'm longing to get my hands on the film adaptation) and Jo Nesbo's "The Leopard". The Leopard is a delightful book; I liked it quite a bit more than "The Snowman", which I read last year. The emotional component in this book is rather less present than in Asa Larsson or Camilla Läckberg (even though the main character, Harry Hole, is a fucked up man lashed by addiction and guilt), but on the other hand the plot is quite a bit more complex and twisted, and will keep you glued to its pages from the first line.

Harry Hole makes some excellent reflections on human nature (which he happens to be not too fond of), and among them I took note of this paragraph:

That's how banal we are. We believe because we want to believe. In gods, because that dulls the fear of death. In love, because it enhances the notion of life.

Pondering on these books and authors a bit more, I think my winner is Asa Larsson, mainly due to the soft fantastic element that she adds to novels like "The Black Path" (my favourite) or "Until Thy Wrath Be Past". This component is unique (I think) compared to other authors, and I would say it's the cherry on top of a carefully cooked, delicious cake.