Regarding the future of C# ...
by Pedro Guida on 01/13/14 03:01:00 pm   Featured Blogs

The following blog post, unless otherwise noted, was written by a member of Gamasutra’s community.
The thoughts and opinions expressed are those of the writer and not Gamasutra or its parent company.

 

A couple of weeks ago, an episode of the "This Week On Channel 9" series referenced Mads Torgersen's presentation in London on "The Future of C#", announcing new features that may well be implemented in C# 6.0.

So, in this post, let me describe some of the features that I hope get implemented in C# in the short/middle term:

1. LLVM-Like Compiler For Native Code

I have talked about this many times, but I think it's a perfect time to mention it again.

So far, if you want to compile MSIL to native code at once, you can use a tool called NGen, which creates a native image of the code for the machine where compilation is being done. The problem with this tool is that its main purpose is to reduce startup times. Meaning? You won't get optimized bits for the whole code; just for the blocks first executed when the program starts.

Imho, we need more ... In this relatively new world of app marketplaces, it'd be really handy to have a model where you can deliver optimized native bits to the device/console/machine where the app will be downloaded and installed, don't you think?

Picture it like this: say you create an app/game with C# for the Xbox One (using portable assemblies or not) and compile your source code to MSIL. Since the hardware of the console is the same in terms of main components (processor, memory, etc.), why not compile the whole MSIL code to native bits optimized for the console at once? (either on your side or on MSFT's servers)

With an LLVM-like compiler this could be achieved and extended to other platforms. But wait a minute! Isn't that what MSFT is doing for WP8? It sounds like it. But wait a minute, again! Isn't it something like the AOT compilation found in the Mono Framework? If the latter produces optimized bits for whole assemblies per platform, then it is!

In fact, many sources have mentioned the so-called "Project N", which would be used to speed up Windows 8.1 apps. What is more, a few sources also mention that MSFT is working on a common compiler for C++/C#. I hope it also brings a more performant way to do interop with C++.

True or not, this is a "must have" in order to end the C++/C# perf discussion!

2. "Single Instruction, Multiple Data" (SIMD)

In modern HW architectures, SIMD has become the standard way to boost performance for specific operations, in particular ("vectorized") mathematical ones.

As a matter of fact, C++ has the DirectXMath libraries (based on the ones formerly called XnaMath), which do implement SIMD but unfortunately are not available from C#.

Again, SIMD is already present in the Mono Framework for math operations, so why not add it to the .NET Framework once and for all?
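
To make the point concrete, here is a minimal sketch of what this looks like today with Mono's Mono.Simd types (the SimdSketch and AddPacked names are just illustrative); an equivalent API in the .NET Framework proper is exactly what I'm asking for:

   using System;
   using Mono.Simd;

   static class SimdSketch
   {
      // Adds two packed 4-float vectors in one operation instead of four scalar adds;
      // the Mono JIT can map this to a single SSE instruction where supported.
      static Vector4f AddPacked(Vector4f a, Vector4f b)
      {
         return a + b;
      }

      static void Main()
      {
         var left = new Vector4f(1f, 2f, 3f, 4f);
         var right = new Vector4f(10f, 20f, 30f, 40f);
         Console.WriteLine(AddPacked(left, right)); // prints the packed sums
      }
   }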

I hope MSFT listens to us ...

3. Extension Properties

We already have extension methods, so this is a corollary of them. Today, you can only implement getters (and setters) like this:

   public static string NameToDisplayGetter (this IPerson person)
   {
      ...
   }

Then, why not have something like this?

   public static string NameToDisplay: IPerson person
   {
      get { ... } // You could also add a setter, if needed.
   }

Of course, the syntax in the example above may vary, but I guess you get the idea. There are several use cases where a feature like this could come in handy, including MVVM or plain INotifyPropertyChanged ones.
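
For reference, here is a minimal sketch of the workaround we write today (the IPerson members below are hypothetical): a Get extension method standing in for the extension property we would rather have.

   public interface IPerson
   {
      string FirstName { get; set; }
      string LastName { get; set; }
   }

   public static class PersonExtensions
   {
      // Today's workaround: an extension *method* posing as a read-only "property".
      public static string GetNameToDisplay(this IPerson person)
      {
         return person.LastName + ", " + person.FirstName;
      }
   }

   // Usage: person.GetNameToDisplay() instead of the nicer person.NameToDisplay.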

4. Generic Enum Constraints

Generics are one of my favorite .NET features. There are lots of things that can be achieved through them, but there is still room for improvement.

One of the things to improve is constraints. So far, when constraining the kind of type a generic parameter can be, we have only two options: class and struct. So, what about enums?

Currently, if you want to mimic an enum constraint, you have to write something like ...

   public void OperationX<TEnum>(TEnum myEnum)
      where TEnum : struct, IComparable, IFormattable, IConvertible // ... and so on and so forth
   {
      … usual stuff …
   }

... and also, given that those constraints only approximate an enum, you need to check whether an enum has actually been passed, generally throwing an exception if not:

   if (!typeof(TEnum).IsEnum)
   {
      throw new ArgumentException("The passed type is not an enum");
   }

Why not simplify it to something like this?

   public void OperationX<TEnum>(TEnum myEnum)
      where TEnum : enum
   {
      … usual stuff …
   }

Not only does it make sense, it would also simplify things a lot and open up a wide range of handy operations and extension methods.
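
As an example, here is a minimal sketch of one such helper written with today's workaround (EnumUtil is just an illustrative name); with a real enum constraint, both the long constraint list and the runtime check would simply disappear:

   using System;

   public static class EnumUtil
   {
      public static TEnum Parse<TEnum>(string value)
         where TEnum : struct, IComparable, IFormattable, IConvertible
      {
         // The runtime check we still need, since the constraints only approximate an enum.
         if (!typeof(TEnum).IsEnum)
         {
            throw new ArgumentException("The passed type is not an enum");
         }
         return (TEnum)Enum.Parse(typeof(TEnum), value, true);
      }
   }

   // Usage: var day = EnumUtil.Parse<DayOfWeek>("Friday");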

5. NumericValueType Base Class

.NET's CLR treats structs in a special way, even though they have a base class: ValueType.
I won't explain the characteristics of built-in primitives and structs here; instead, I'll ask the following question: in the current version of C#, can we achieve something like this ...?
 
   TNumeric Calculate<TNumeric>(TNumeric number1, TNumeric number2, TNumeric number3)
     where TNumeric : struct
   {
       return number1 + number2 * number3;
   }
 
The answer: not without proper casts. So, a life changer for these types of situations would be to add a specialization of ValueType that enjoys the benefits of structs and also supports basic math operations without any kind of wizardry: NumericValueType.
 
With that class and a new reserved word like, say, "numeric", "number" or "primitive", we could write generic operations and extension methods with a syntax as simple as:
 
   TNumeric Calculate<TNumeric>(TNumeric number1, TNumeric number2, TNumeric number3)
     where TNumeric : numeric
   {
       return number1 + number2 * number3;
   }
 
How about declaring new types of numbers? Easy ...
 
   public numeric Percentage
   {
      … usual stuff …
   }

... or ...

   public numeric Half
   {
      … usual stuff …
   }
 
There would be no need to specify "struct", since "numeric" would be a value type that supports basic math operations (which we would need to implement when declaring the type, maybe by overriding some base operations), and so in common scenarios there would be no need to cast values to do math.
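
Until something like this exists, the closest we can get in today's C# is a sketch like the following (the Calculator class is just an illustrative name), which falls back to dynamic, so the operators are resolved at run time, giving up both compile-time checking and performance; a "numeric" constraint would let the compiler do this work instead:

   public static class Calculator
   {
      public static TNumeric Calculate<TNumeric>(TNumeric number1, TNumeric number2, TNumeric number3)
         where TNumeric : struct
      {
         // The DLR resolves the + and * operators at run time; a real "numeric"
         // constraint would make this statically checked and optimizable.
         return (dynamic)number1 + (dynamic)number2 * (dynamic)number3;
      }
   }

   // Usage: int a = Calculator.Calculate(2, 3, 4);          // 14
   //        double b = Calculator.Calculate(1.5, 2.0, 3.0); // 7.5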

6. Declaration of Static Operations On Interfaces

Put simply: the possibility of declaring static operations when writing interfaces, like this:

   public interface IMyInterface
   {
      static void DoStaticOp();

      static bool IsThisTrue { get; }

      ... instance properties and operations ...
   }

This presents a challenge to both polymorphic rules and abstract declarations, that is, at a "static" level. But as usual, with the proper rationale and care when modifying the language spec, it could be achieved. Many of you may be asking "why bother?", but believe me when I say that I have run into situations where static operations on interfaces would have come in really handy.
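
To show why, here is a minimal sketch of the kind of workaround we are stuck with today (IPrintable, Report and StaticContract are hypothetical names): because the interface cannot declare the static member, callers have to fall back to reflection against a naming convention that the compiler cannot enforce.

   using System;
   using System.Reflection;

   public interface IPrintable
   {
      void Print();
   }

   public class Report : IPrintable
   {
      public void Print() { Console.WriteLine("Printing report ..."); }

      // Convention only: nothing forces every IPrintable implementer to provide this.
      public static bool IsThisTrue { get { return true; } }
   }

   public static class StaticContract
   {
      // Looks up the conventional static property, since the compiler cannot require it.
      public static bool GetIsThisTrue<T>() where T : IPrintable
      {
         PropertyInfo prop = typeof(T).GetProperty("IsThisTrue", BindingFlags.Public | BindingFlags.Static);
         return prop != null && (bool)prop.GetValue(null, null);
      }
   }

   // Usage: bool flag = StaticContract.GetIsThisTrue<Report>();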

Well, this is it for today. What would you guys like to see implemented in C# 6 and beyond?

Comments are welcome,
~Pete


Comments


Amir Barak
"6. Declaration of Static Operations On Interfaces"
This makes no sense though; what's the point of a static polymorphic function? How could you even implement it without the ability to trace it through a virtual table? And if we have a virtual table what's the point of making it static?

I remember having one situation a few years ago that I thought this might be useful and then realized my design was flawed and it wasn't the right answer (wish I remembered the how/why of this situation).

I second and third on having a struct constraint and making enums properly adhere to the rest of the language. As well as having property type extensions (although I'd argue against usage of properties at all to be honest since they're a hindrance more often than not).

John Maurer
Static methods exist even if the object they belong to hasn't been instantiated yet (see the singleton pattern); you don't need a virtual table.

Static polymorphism allows for variance at compile time. This is particularly useful when dealing with interfaces and class hierarchies; take the prototype pattern, for example. The pattern itself simply kicks out a copy of itself; if done well, it's a copy of itself minus the prototype features. If you wrote an interface for prototyping and inherited from a template parameter (meaning we could dictate the type of our prototypical instance at compile time), you're going to save yourself a lot of work.

Generics play a big part in building robust toolsets; through meta-programming you can define techniques without tying them to the objects that use them. Providing static operations on interfaces is only going to make that easier.

Amir Barak
Static functions exist completely outside the scope of objects. They're a kludge into objects (especially in a language like C#) because you cannot have free-floating functions (like in C, for example). Let's not talk patterns here; let's talk specifics. Give me a production-level example (that is, without foo and bar, I hate those).

There's no doubt generics play an important part in any toolset; it seems, however, that you're talking closer to the concept of templates in C++ than generics in C# (which are much more limited). Meta-programming is nice and all, but overuse of generics and "patterns" usually leads to convoluted APIs, in my opinion.

Pedro Guida
Thanks for your comment.

I won't argue whether there is a point or not in allowing static ops on interfaces since it has been discussed many times:

http://www.google.com/search?q=static+method+interfaces+c#

But I must admit that I can live without it.

I believe this is more a technical limitation of C# based on weighing benefit against complexity, especially when one can find alternative solutions to a specific problem without the need for such a feature. And that's why I stated that it presents a challenge.

I don't know why, but in a way (or to some extent, if you prefer) I always thought that default(T) is a workaround to something like T.Default where T: ..., IDefaultable<T> (with a static getter property "Default").

Amir Barak
I would like to know the situation you're referring to where static methods on interfaces make sense?

Pedro Guida
Well, as you said in a post above: "wish I remembered the how/why of this situation"

Toby Grierson
Why is the native code important? .NET, like Java, is compiled to native as necessary for the platform in question. This could be cached by the runtime library to reduce the incidence of early-session lurches, but there is little reason to deploy native code. You can make arguments for situations where there aren't problems, but you still have to justify the effort and complexity of creating and maintaining this option's existence, and if it isn't broadly used then it will surely rot. In any case, performance concerns are more complicated than whether code is native or not. Having used Java and C# for years and having written performant code in both, this whole discussion actually feels rather remote, and I'm surprised to have encountered it again anywhere. I remember having discussions about it a real long time ago and concluding that anyone arguing about C# vs. C++ performance is probably screwing around.

Josiah Manson
Because there is a huge speed difference. A couple of years ago I tried switching from C++ to C# but C# was 5x slower. That isn't something you can hide under a rug. Things may have changed in the intervening years, but I doubt it. In 2010 when I tested performance, JIT compilation had been standard practice for several years already and people had already been saying that the performance is nearly identical between languages for years. My data linked below proved to me that those claims are false. I won't deny that C# is more convenient and may save programmer effort, but speed parity is a lie.

http://josiahmanson.com/prose/speed_cpp_csharp/

Josiah Manson
I forgot to mention. You may be right that native code does not help much, because it could be that the nature of C# limits performance rather than the instructions. I would be quite interested to know what the actual limiting factor is between C++ and C#. Lack of template specialization? Too many pointers? Lack of native code? Garbage collection? Worse optimization by the compiler?

Brian Milligan
Working on smaller processors, like mobile, has brought more people back to native, and when it comes to console performance, where you are trying to maximize your hardware, no one ever left native. Apple really showed what Objective-C did for iOS platform performance, and MS in response has made a strong push for "going native" in the Windows 8 ecosystem.

Pedro Guida
Agreed. That's why having a common compiler for C++ and C#, Project N and M# is so important.

Daniel Lau
As a professor of image and signal processing, let me also chime in that SIMD instructions are vitally important for high-performance processing. And compiler technology has not reached a point where I can rely on it to convert my non-vectorized for loops into sufficiently optimized SIMD instructions. There's just no getting around it. High-performance computing applications require native (i.e. intrinsic) code. And it's not that big of a deal converting between Intel SSE and ARM NEON instructions. In fact, Intel used to have a header file that translated ARM NEON instructions into Intel SSE instructions as an attempt to get ARM developers to port their code to Intel.

In Toby's defense, there is a very large population of computer scientists who believe very strongly in abstracting away the hardware, and to write code that is not universally portable from one device to another is almost sacrilege.

Paulo Pinto
The new JIT compiler will use the same backend as VC++.

http://blogs.msdn.com/b/dotnet/archive/2013/09/30/ryujit-the-next-generation-jit-compiler.aspx

http://blogs.msdn.com/b/dotnet/archive/2013/11/18/ryujit-net-jit-compiler-ctp1-faq.aspx

Paulo Pinto
Well, .NET on Windows Phone 8 is already compiled to native code, using the MDIL compiler, an outcome of the Singularity research OS.

Google, on their side, is taking steps to replace the Dalvik VM with ART, a native runtime with an ahead-of-time compiler.

Pedro Guida
"Google on their side is taking the steps to replace the Dalvik VM with ART, a native runtime with ahead of time compiler."

And Portable Native Client.

