Interview with Miguel de Icaza: Microsoft, Mono, smartphones, and more

by admin

Miguel de Icaza has earned a lot of credit in the past (creating GNOME, Mono, Xamarin, and more), but he isn't living on past achievements and continues to work hard – now at Microsoft, whose relationship with him was once uneasy.

So at the DotNext conference we asked him about both:

About the past: the beginnings of Mono, the relationship with Microsoft, and so on.

About the present: where has it all ended up? How does Miguel, who once created Xamarin, view modern cross-platform mobile development? And what is he doing now?

And now we’ve made a text version of this interview for Habr.

Mono and the problems of choice

– Miguel, you gained fame for creating tools for the Linux world like Midnight Commander and GNOME. What prompted you to consider bringing .NET to Linux, given that cross-platform Java and perhaps other cross-platform solutions already existed?

This is a great question. I've run into it before, but you added one interesting factor: why do this if a solution like Java already existed? You know, I look at the calendar, I'm almost 50, and I think many current developers are much younger; they don't remember what it was like to work back then. It's worth starting by understanding that when we started this project around 2000, people had fairly bulky computers with 64 MB of RAM at best, and that was quite a lot. Today we measure everything in gigabytes; even the smallest computers have 8 GB of RAM, far more than we could have dreamed of back then. I don't remember exactly how much my laptop had – I think something like 16 or 32 MB.

Another thing to keep in mind is the state of Java around 2000. Its source code was not open. Java was free – you could download it and use it – but there was no access to the source code, and you couldn't modify it. I was pretty active in the open source community at the time, especially in the GNU project, which later gave rise to GNOME.

One of the main principles of free software is that it is completely open source. That means you have access to the code and you get the rights associated with it. For people back then, and for me personally, it was important to create a tool for all of humanity. That's what open source means: it belongs to all of us. When I contribute source code, it belongs to me and to you equally; you have the same rights to change and modify it as I do.

At that time Windows had already gained popularity, the Mac was somewhere on the periphery, and there were also closed operating systems like Solaris and others. We were trying to create a future with a completely free OS. That sounds kind of crazy, and it's important to realize that Windows was the absolute leader at the time; the idea of building a new OS from scratch was considered ridiculous. Linux had some success on the server side back then: Red Hat had come out, and people used it to host web sites – that was at the beginning of the Internet's growth – and for file servers. That was the state of Linux when GNOME began, in 1997 or so, and back then computers had even less memory.

Quite a humble beginning, but we were building everything on one principle: it all had to be open source, and Java didn't offer that at the time. We couldn't build the future on a technology that didn't set the same rules for everybody, where some had more rights than others. So Java fell away right away. In fact, a few years before Mono, there were attempts to create an open source incarnation of Java (the Kaffe virtual machine and probably some others). The problem was always the user interface: the projects started with virtual machines, but the UI was the complicated part, and there was even an attempt to copy AWT (Abstract Window Toolkit) on top of the separate Bliss toolkit.

The main problem to solve in 1998 or 1999 was that Java was free but not open source; people couldn't contribute to its development, so it moved slowly. Open source Java never got widely adopted, because a good alternative already existed in the form of the official version. That didn't work, and we couldn't rely on that experience. But after many years of developing desktop Linux software, it was obvious how incredibly difficult it was to use C or C++. Even when we launched GNOME in 1997, we knew those languages were not the ideal choice. If you look at the project's announcement, it says we wanted to use high-level languages; at first we suggested Scheme, which was the project's preferred language, but we wanted to add other scripting languages as well.

It became something of a GNOME theme that we wanted to raise the level of programming and abstraction; we didn't want to write low-level code. This was all back in 1997 or '98, when computers were not exactly powerful. If I'm not mistaken, the first program we wrote in Scheme was a little interface for running the ping and netstat commands. It was a simple UI built with the GTK library and written in Scheme, but the problem was that it took about 17 seconds to start. We tried Scheme, but at the time it was not well optimized and was quite slow, so we had to go back to C. If you dig up the first versions of GNOME, you will find code in Scheme and immediately understand why we had to rewrite it: it was incredibly slow.

You have to remember that a lot of this kind of thing was developed in a world that didn’t have the kind of power we have now, so we had to go back to C.

Mono came along when Microsoft announced .NET. And it was the mix we were looking for: a high-level language (C#) combined with performance. Yes, performance was achievable with Java as well, but .NET made a few additional design decisions that played to our advantage, such as the distinction between structs and classes. Remember, this was only the early 2000s, and such things mattered then; it's only now that people take the innovations of that era for granted.
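
To make the structs-versus-classes point concrete, here is a minimal C# sketch (an illustration added for this text, not something from the interview): structs are value types that are copied on assignment and can live inline in arrays, avoiding per-object garbage-collector overhead, which mattered a great deal on the memory-constrained machines of that era.

```csharp
using System;

struct PointStruct { public int X, Y; }  // value type: no heap object per instance
class  PointClass  { public int X, Y; }  // reference type: tracked by the GC

class Demo
{
    static void Main()
    {
        var s1 = new PointStruct { X = 1 };
        var s2 = s1;          // full copy: s2 is independent of s1
        s2.X = 42;

        var c1 = new PointClass { X = 1 };
        var c2 = c1;          // copies only the reference: same heap object
        c2.X = 42;

        Console.WriteLine(s1.X); // 1  (the copy did not affect s1)
        Console.WriteLine(c1.X); // 42 (c1 and c2 are the same object)

        // An array of a million structs is one contiguous allocation;
        // an array of a million class instances is a million small objects.
        var points = new PointStruct[1_000_000];
        points[0].X = 7;
    }
}
```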

We started looking at an implementation, and we got hold of some incomplete specifications. IBM was involved in the ECMA standardization process – IBM and Microsoft, as I recall, were not exactly on good terms – and it invited us to participate as invited experts. Through that process we managed to get enough documentation from ECMA to create Mono. I may have messed up some of the details, but that's not so important.

Basically, .NET was exactly what we wanted for creating desktop applications on Linux, so that's how the project started. It turned out to be something we could actually do, and later we used it far more widely. This is how it all began: we wanted to work with better systems, Microsoft had such a system, and it was in the right place at the right time.

I think if someone wanted to create a similar project now, they could just use JavaScript or Python; today's computers are so powerful that you would make different decisions starting today. Nevertheless, I think .NET was a great choice at the time – though, as it turned out, also a political one. It was highly politicized because, if you recall, Microsoft was opposed to open source back then.

From a technical point of view it was a great project, and I don't have any regrets about it. There was a lot of stress when I started building the company around this project, because people didn't want to be part of it; fortunately, those who did join made invaluable contributions to Mono.

Back to the original question. If you compare Mono to open source Java, open source Java didn’t get a lot of support in the open source community, and the community that formed around Mono was huge. So many people contributed to the development of this project in the early days. It’s been amazing to watch it grow.

This is why we chose .NET instead of Java: with .NET we could be open source in the full sense. Java's situation changed later, but by then it was too late.

Non-copying policy and relationship with Microsoft

– Great, thanks. I have a question regarding a pull request for Mono that I sent 6 years ago. It was a small request where I tweaked the Stopwatch class and fixed a deadlock that occurred when the elapsed value was negative, by replacing it with 0. I also attached a link to the Microsoft source code from their website, saying "Hi, they already have this fixed, time to fix it in Mono too", but the request was closed with the message: "Please do not attach any Microsoft source code". I re-created the request, and it was approved a year later by you, Miguel – this was already in 2015, by which time the relationship between Mono and Microsoft had changed somewhat. My question is about Mono's "no copy" policy, under which, in simple terms, you could not take Microsoft code. I imagine it was difficult to implement even some simple methods in the Base Class Library, where in most cases there was only one sensible implementation that was very hard to rewrite differently. How did you deal with such cases?

Let's talk about the politics first. You can trace it all the way back to the GNU project. Remember that we were trying to make a public tool, and you could not just take somebody else's work and make it public; we did not have that right. So it was important, both for the GNU project and for us, to have a clean history – that is, to create code based only on public information. One of the practices we introduced early on was asking contributors to write tests for their code, because many people at the time simply sent in decompiled files. We insisted that people test their code first and only then send it in. That way we could be somewhat sure they knew what they were doing. It's not a panacea, of course, but it helped us weed out people who had just run the decompiler that was popular at the time – some of whom even sent in fixes with viruses.

This was the project's "don't take anything from them" policy, and it originated in GNU. We always believed we could do better, and in some cases we succeeded; in other cases people weren't interested enough to fix this or that thing, and among the latter you'll find things that were never implemented. In the end, there were plenty of areas where it was either too difficult, or there wasn't enough interest, enough users, or simply enough demand.

Mono’s support and capabilities were very unevenly distributed. The most popular APIs received the best optimization and support, the least popular sometimes were not implemented at all. It’s a kind of built-in study of customer needs. At Microsoft, for example, you have to do this, which means communicating with customers, prioritizing, collecting data, building graphs, and drawing conclusions about what customers need and what they don’t need. With open source, that happens somehow on its own. For example, at one point we thought WCF was a great idea, and we even started working with it a little bit. However, many of its complicated parts were not implemented, and in many cases no one cared. It turned out that WCF created a lot of features that few people used. Open source to some extent allows you to keep track of what’s used more often and what’s used less often. There’s a lot of difficulty tied up in that when you’re trying to recreate an entire platform because you don’t have that data to begin with.

The relationship with Microsoft has changed over time, it’s been quite curious to see this. Microsoft began its relationship with open source from an antagonistic position and saw it as a threat. This was all before the cloud era, before people realized that many conventional business models would run out of steam. There were a lot of changes, one of which was the ECMA process, which resulted in more available documentation. The API documentation that Mono had been using for years was based on the ECMA documents. Microsoft had to license its documentation for public use, so we took that as a basis and redid it for Mono.

There's even a whole story associated with that. We wrote a lot of tools to import the ECMA documents. Microsoft just took what they had internally, exported it into XML, and passed it to ECMA; we took that documentation and built a whole set of API documentation tools around it. We later used that toolset for other APIs as well, extending and augmenting it. Years later, when our relationship with Microsoft got better, they once again exported their documentation to XML and passed it to us, and we imported it into the repository. Then a few more years went by, and it turned out that the tools they had used to maintain their documentation no longer worked. Nobody knew how to export the documentation: the tools were broken, and the people who knew how things worked had left the company for one reason or another. So the only living source of the documentation was the Mono documentation, and we imported all of it back into a format suitable for Microsoft.

This led to a funny turn of events. There is an API in .NET called System.Net.Mail. In my opinion, it was probably developed by one of the interns over a summer. The API is pretty bad, in the sense that it was probably a toy API for sending simple messages. It's full of design flaws, bugs, and limitations, and it's not up to modern standards. It might have been perfectly appropriate in 2002, but not for the modern era with its various security requirements. So we corrected the documentation and wrote: "Please don't use this API, it's horrible. You'd better use the open-source MailKit and MimeKit projects, which one of the Mono developers wrote", and we marked the API as obsolete in our documentation.

When Microsoft imported our documentation into their official documentation, that note of course became the official position – the comment I had left years earlier hadn't gone anywhere. People were furious; many of them had been using this API for years and resented being told to use something else. You can still find almost 300 comments on GitHub about it. People were quite upset about this side effect of importing the documentation back.

The funny thing is that all of the .NET documentation is based on the Mono documentation. It has changed and improved a lot, supports far more functionality, and has many more people working on it, but the Mono tools are what actually powers the Microsoft documentation now. All of this is a long way of saying that we started with a little collaboration, and then there were small attempts to bring internal work into the open. Various project managers and engineers at Microsoft argued that they needed to release this or that technology so that we could use it – I have no idea how they managed to convince management at the time. Nevertheless, technologies appeared in the open: the DLR (Dynamic Language Runtime), Math (I'm not its biggest fan, but it was one of the very first things Microsoft put into the public domain), and also a part of ASP.NET, or rather an extension of it that shipped as a package with extra features.

Then, in 2007, at the height of the politics around Mono in the open source world, when the company was owned by Novell, an agreement was signed with Microsoft that was considered an incredible betrayal of the open source community. That agreement made life much harder for many of us politically; things were just as polarized as they are in the world now, only within the software realm.

I still thought I should use .NET to create desktop applications on Linux, and then Microsoft introduced Silverlight. Over three weeks I described in my blog how a team of 10 people, including myself, implemented Silverlight on Linux at a level sufficient to prove the idea viable. We got involved because someone from Microsoft's French division, without asking anyone's permission, wrote to me and offered me the keynote slot on Silverlight at a .NET conference. I said yes, and my talk was basically an implementation of Silverlight on Linux in 3 weeks. I finished merging the team's code 20 minutes before the talk. It ended up leading to closer collaboration with Microsoft, and the person who invited me must have gotten a slap on the wrist for it.

As part of the regular meetings between Microsoft and Novell, we had a meeting with Bob Muglia in the Novell office, where I showed him what we could do, and he was impressed. After that we signed a collaboration agreement that gave Novell the audio and video codecs. This was incredibly important for Linux and for us – it's a very complicated issue involving patents and licenses, so I won't go into the details. But we got them, and we also got a set of tests for the major .NET libraries. So for everything that Silverlight used, we had tests. It was a way to test Mono, and we fixed a huge number of bugs in absolutely every aspect of the project. The process took a few years, because Microsoft was pretty reluctant to share internal test suites, and although this was a formal agreement, not the whole team was involved in the process.

Then .NET started to have some problems. I can't remember exactly now, but when Windows 8 was released, .NET somehow took a back seat; Silverlight, in a sense, was no longer in demand. But we found a new vocation. While Microsoft was hard at work promoting Windows 8, we realized that Microsoft and Mono were heavily politicized in the open source world, and we decided we could bring .NET to mobile devices, because people there didn't care about politics – they were interested in selling applications, and C# was better than Objective-C and Java. We effectively left the open source world and started selling a product for mobile devices, and I must say it was great. On the one hand, I love open source. On the other hand, it's so nice not to have to deal with very opinionated people, with complaining, whining, and attacks. In short, I loved the transition to working on closed source, and for a multitude of reasons it was great.

At that time the relationship with Microsoft was different. The company later realized that .NET was key to Windows' success – it had been somewhat forgotten, because JavaScript was thought to be the future. Then it turned out that everyone liked .NET, and mobile was one of the things that would help it stay relevant to a large part of the user base. So we started collaborating more closely and having more frequent meetings with Microsoft staff and engineers, though with mixed success. At some point the head of the cloud division invited us to give a presentation to his department, after which we developed a very good working relationship – and then he was appointed CEO, which made us happy.

That, roughly, is how Mono's relationship with Microsoft evolved. Eventually they acquired us, and now Mono is part of .NET. Things are very different today than they used to be.

User interface, Mono and .NET, relationship with Mac

– Thanks for the detailed story and historical background. We'll come back to Mono and .NET, but before that, and before we talk about Xamarin and mobile devices, I wanted to ask a slightly different question. You mentioned that your goal was to create desktop applications for Linux, but my impression of Mono has always been that it's more of a pure server-side, terminal-oriented technology without any UI, and that the only UI frameworks in .NET were WinForms and, later, WPF. What is the history of the user interface on Linux for Mono and .NET?

Wonderful question. Mono started out as a way to fully support GNOME. We made a .NET binding to GTK, the native API used by GNOME itself. In the Novell days, we used it to create several key applications: a photo app, a music player, a builder, an IDE, and more. All in all, there were about 6-7 such applications built with Mono that shipped in Novell's version of Linux, and Ubuntu included some of them, if my memory serves me correctly.
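
For a sense of what that binding looked like in practice, here is a minimal hello-world sketch using Gtk# (the name Mono's GTK binding became known under); it assumes the gtk-sharp assemblies are available on the machine.

```csharp
using Gtk;

class Hello
{
    static void Main()
    {
        Application.Init();

        // Gtk.Window is a thin C# wrapper over the native GTK window.
        var window = new Window("Hello from Gtk#");
        window.SetDefaultSize(300, 100);
        window.DeleteEvent += (o, args) => Application.Quit();

        window.Add(new Label("Mono + GTK on the Linux desktop"));
        window.ShowAll();

        Application.Run();
    }
}
```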

As I said, there was a lot of politics back then, and there were attempts to reimplement these things in other languages so that nobody would be "contaminated" by Microsoft's technology. We wrote some applications, and they were included in some Linux distributions, but because they were so politicized they did not find much success at the time. For us, however, it was a great success: we believed nobody else had written that many desktop Linux applications.

When we moved to mobile devices, we saw what people around the world were capable of: we had been proud of 6 or 7 applications on Linux, and within just 6 months there were more than 300 applications built with Mono on mobile devices. Since then the number has grown considerably and is still growing.

As you said, people associated .NET primarily with miscellaneous server work, because most desktop Linux applications are used only by certain industry professionals, and those applications haven't become a major force yet. The choice of desktop Linux applications is somewhat limited these days; if you're lucky, you get something like Electron or something based on it. The time for desktop Linux apps has not yet arrived, and obviously the huge market of OSs like iOS and Android is where all the action is at the moment.

Anyway, we had a toolkit, we liked it, and we also tried to copy WinForms – a very hard thing to do, because it essentially requires recreating Win32. I wouldn't recommend that to anybody.

– Yes, but Microsoft nevertheless did it to some extent in .NET 5. We've been talking about Linux, but macOS also pays a lot of attention to desktop applications, so my question is: what is the relationship between Mono and macOS?

– This is a great question! Before I answer, I should note that the WinForms support Microsoft has implemented is only available on Windows, whereas we tried to recreate it on other platforms.

macOS has a different API, AppKit, plus lots of other frameworks. AppKit is the framework you use to build user interfaces. We simply created a binding to it, called Xamarin.Mac. If you have Visual Studio on macOS and you're building a project for the Mac, you're going to end up using AppKit one way or another – assuming, of course, you want to create a native macOS app.
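
As a small illustration of what "just a binding" means here, a minimal Xamarin.Mac program might look like the sketch below (assuming the Xamarin.Mac SDK is installed; the C# types are thin wrappers over the corresponding AppKit classes).

```csharp
using AppKit;
using CoreGraphics;

static class Program
{
    static void Main(string[] args)
    {
        NSApplication.Init();

        // NSWindow, NSWindowStyle, etc. map one-to-one onto AppKit types.
        var window = new NSWindow(
            new CGRect(0, 0, 400, 200),
            NSWindowStyle.Titled | NSWindowStyle.Closable,
            NSBackingStore.Buffered,
            false);

        window.Title = "Hello from C# and AppKit";
        window.MakeKeyAndOrderFront(null);

        NSApplication.SharedApplication.Run();
    }
}
```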

In the case of .NET, I think almost every existing platform has a binding to its native APIs, so today you can always use the native API to communicate with the host. There are also many attempts at abstractions – Avalonia, Xamarin Forms, and others. All in all, there's a big choice for people who want to write code once and run it anywhere, subject to various constraints; or you can bind directly to the platform's native API.

We call the bindings to AppKit Xamarin.Mac, and they cover about 60 frameworks. You can think of them as, say, a way to communicate with DirectX: you have an API that you use to talk to DirectX, OpenGL, or Vulkan. Some people want to use those low-level APIs, and some want to work three levels higher – display a cube and not care how it works. Or you can go low-level and write your own shaders and so on. That's the choice developers have today. And I apologize for such long answers, I'll try to be shorter.

Mobile devices and the future of technology

– No, no, it was very interesting. So we’ve talked about developing interfaces for classic applications, now I want to ask about developing interfaces on mobile devices and the future of development on them. Right now we have a lot of different technologies that allow us to develop mobile applications, whether it’s Xamarin, some native tools, JavaScript, Kotlin/Native, and many others. What do you think about the future of the mobile world, which technology will succeed? Which technology should be used now?

It's hard to answer that question. I'm not sure I have any recommendations, because everything has a downside and everyone has their own opinions. I'll give you an example: the Apple iOS community are incredible fans of their native APIs; they use them to create a flawless user experience and spend years developing such apps. The problem is that not everyone has the time, money, knowledge, or need to do that. Yes, it would be ideal to always develop with that approach, but it's not always possible or necessary. In those cases, technologies like React Native come to the fore – a great option if you're a web developer with JavaScript experience. I haven't used it much, but it has a good interactive mode. Even though Swift and Kotlin are great tools, you don't always have access to developers who work with them. Or, for example, in third world countries few people have Macs, so cheaper computers are used: you build the app on those computers for Android and cross your fingers that someone creates a version for iOS.

There are many limitations and tradeoffs in choosing any one thing. I think all of these technologies have done a good job in their own development, and they are all working to expand their capabilities. I haven't had a chance to use Kotlin/Native, but Flutter, for example, is a great example of the "write once, run anywhere" approach – and of course that approach has a downside: it's not quite native, it just looks that way, and the integration doesn't always go smoothly, but for many people that's enough. Another important question is whether you have code that needs to be reused; that will affect a lot of your decisions. If you already have code in Java, .NET, or Objective-C, your decision will be driven more by that existing code than by anything else.

I don't know if any particular technology will come out on top, because the development world is incredibly large, with many different needs. .NET tries to provide access both to the low level and to the abstraction level. Honestly, if I had a big budget, I would do something similar to what Flutter did: I would create my own rendering system. Not because I like that approach, but because users like the result, and developers even more so. The problem I see is that a Flutter-like system requires a very large development budget. I don't think we'll see a Flutter clone anytime soon; Avalonia is probably the closest thing to it at the moment, but it will take years to solve the problems Flutter has already solved.

In any case, I’m afraid I can’t give any clear answer. Naturally, I have an emotional attachment to .NET, I think it’s a great technology. However, everyone’s needs are different, and even .NET doesn’t meet all users’ needs. There is no single solution that will work for everyone, but we will still continue to see blogs with the message that we should use one technology or another.

I want to say that I personally love writing code in Swift. It's a very interesting language. The only problem is that SwiftUI, which I like, only works on Apple platforms, so it's not suitable for most of the world, because most of the world runs Android. But it's an interesting thing, and when I have a free evening and I'm not watching something on Netflix, I like playing with Swift. I'm sure there's some kind of Java analog, but as you may have gathered, I don't use Java.

– Isn't this SwiftUI story – it only works on Apple platforms – just like 20 years ago, when .NET only worked on Windows and you were one of the people who made it work on Linux as well? Someone should take it from Apple and port it to Android and all the other platforms…

– I think someone should do it, but it won't be me. Partly because I have three kids, and sometimes they don't give me a break; it's hard enough to combine working at Microsoft with three kids. It would have to be someone under 30 without children, who can fully focus on a project like this. But it's a great idea. In fact, if anyone wants to start working on it, I have a couple of ideas on how it could be done, and I'd be happy to teach someone – only I won't be doing it myself. Plus, it's a pretty long process.

There's a really good thing called SwiftWebUI – somebody made an implementation of SwiftUI for the web that generates HTML. It could be used as a basis, in the sense that someone has already figured out how the engine works, and then bound to something else to render to Android. If I were doing it myself, I would probably use Flutter as the engine, since its rendering system is closest to what SwiftUI offers. Not a perfect solution, but it would help find the right path.

As I said earlier about Flutter and Avalonia, these rendering engines are much more complex and expensive than people tend to think. If I were in the business of porting SwiftUI, I'd take SwiftWebUI and combine it with Flutter to create a cross-platform solution. But again, I don't think there's a real need: for a lot of people React Native is enough.

Official Xamarin Forms on all platforms

– Now that we’ve talked about different UIs, there’s another area for discussion in .NET. Right now we have Xamarin Forms as some sort of official cross-platform solution from Microsoft on .NET, we also have Blazor and MAUI. Is this another attempt by Microsoft to create solutions for different segments, or will they be combined in some way?

Good question. I find the Blazor programming model very interesting, and Microsoft has done a pretty good job with it. It started with a server-side application model: it keeps state on the server and is a good fit for interactive applications. It came out of a desire to create a technology that would be easier for web developers to pick up, and I think it's amazing that something that started as an experiment has gained so much popularity among them. I'm very impressed with the speed of its growth.

Then there was the version of Blazor that relies on the client rather than the server to run the application logic. If you have a million or more users, it's better to run as much of the application as possible on the client side instead of scaling up server capacity. That version is now implemented on WebAssembly, and it's also relatively popular. Both are available in .NET 5, which means you can choose whether your code runs more on the server side or the client side.
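
To make the "same code, either side" idea concrete, here is a minimal counter component written in plain C# (a sketch; in practice most people would write this as a .razor file). The identical component class can be rendered by Blazor Server over a SignalR connection or run in the browser on WebAssembly; the hosting model, not the component code, decides where it executes.

```csharp
using Microsoft.AspNetCore.Components;
using Microsoft.AspNetCore.Components.Rendering;
using Microsoft.AspNetCore.Components.Web;

// A counter component; clicking the button updates state and re-renders,
// whether the code runs on the server or on WebAssembly in the browser.
public class Counter : ComponentBase
{
    private int count;

    protected override void BuildRenderTree(RenderTreeBuilder builder)
    {
        builder.OpenElement(0, "button");
        builder.AddAttribute(1, "onclick",
            EventCallback.Factory.Create<MouseEventArgs>(this, () => count++));
        builder.AddContent(2, $"Clicked {count} times");
        builder.CloseElement();
    }
}
```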

Blazor came about as a result of an experiment that gained popularity and subsequently became a product. That’s where everything comes from: from ideas, experimentation, and trying new things. If people like them, they keep working on them. You can see some of these experiments, like Kubernetes support. We’re experimenting with a lot of things, some of them will get support, some of them won’t. Blazor was one of the things that got support.

Now, Xamarin Forms and MAUI. The former officially works on iOS and Android, and unofficially it's available on Windows, Mac, WPF, and so on – but those are all unofficial ports. MAUI is an attempt to make all these ports official, and in the process the MAUI team decided to take some problematic APIs and improve them. You can think of MAUI as the official version of Xamarin Forms on all platforms. That description may not be entirely accurate yet, since in the future we will try to improve some of the APIs. On the one hand, if we improve them, they will obviously get better; on the other hand, if you change an API, you essentially restart the whole ecosystem: you need component vendors, new versions have to be built, people have to port their code, and so on.

In simple terms, MAUI is Xamarin Forms with official support for all platforms. If I remember correctly, the original plan was to drop the Xamarin name, namespaces, and assemblies to make it clear that it was cross-platform and part of .NET. Then we realized a lot of people would have to rewrite their projects because of that, which could cause real difficulties. It will definitely support all platforms; the only question is whether we break the work people have already done or keep things simple for them.

Working at Microsoft, current interests, and the .NET Foundation

– We don’t have much time left, and I’d like to discuss your life after joining Microsoft. What exactly are you doing now? What is your experience with Microsoft as someone with a long Linux background, since many people from the Unix world are not so fond of Microsoft? Don’t you have any mixed feelings about it?

As you know, Microsoft is very fond of Linux right now. I don't know how confidential the details are, so I won't say much, but Azure is very popular for Linux deployments; that has been a major component of its growth. We also have the Surface Duo, a phone running Android, which is built on Linux, and on top of that we have large teams working exclusively on Linux. In fact, the core OS team works on both Windows and Linux. You could even say we are now one very big Linux company. Yes, we develop and sell Windows, but Linux, as a force of nature, is not going anywhere. On top of that, we have Azure Sphere, a security-focused embedded OS that runs entirely on Linux.

So it's perfectly normal to like Linux and work for Microsoft. As for me, even though I use Linux, all of my work machines are Macs – I'm literally surrounded by them at home – except for a few Linux virtual machines. So yeah, that's totally fine.

– Speaking of Microsoft and your work there, you mentioned that you are no longer involved in the .NET platform. Could you tell us what exactly you’re doing now? Unless, of course, it’s confidential.

You know, once you've worked with .NET a little, you're a supporter forever, and I still work with the team that plans and tracks the progress of .NET. For the last year or so, I've been working on AI and AI execution environments. If you follow the development of F#, you can see the work we've done on deep learning in F#. It's interesting because deep learning relies on a thing called backpropagation, which you have to work through in order to train a model. Deep learning specialists go through a particular process for this, and there are a couple of solutions for it in TensorFlow and PyTorch. But if you're a software developer and you look at how those solutions actually work, your immediate reaction is that it shouldn't be done that way.

In F# we've been working on a feature that lets you do exact automatic differentiation without tricks like that. With F#, you can do automatic differentiation in a very robust way, and it also helps with work that many people struggle with: the optimizer, which has to be tuned and tweaked for training to go well. There are a lot of optimizers out there, and I would even say they're a kind of duct tape – people immediately start changing the settings when something isn't working right.
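
The F# work itself isn't shown here, but as a rough illustration of what exact automatic differentiation means (a concept sketch in C#, not the API the F# team is building), forward-mode AD can be done with dual numbers that carry a value and its derivative through every operation:

```csharp
using System;

// A dual number carries f(x) together with f'(x); arithmetic on duals
// applies the usual differentiation rules, so the derivative is exact,
// with no finite-difference approximation error.
readonly struct Dual
{
    public readonly double Value;       // f(x)
    public readonly double Derivative;  // f'(x)

    public Dual(double value, double derivative)
    {
        Value = value;
        Derivative = derivative;
    }

    public static Dual operator +(Dual a, Dual b)
        => new Dual(a.Value + b.Value, a.Derivative + b.Derivative);

    public static Dual operator *(Dual a, Dual b)  // product rule
        => new Dual(a.Value * b.Value,
                    a.Derivative * b.Value + a.Value * b.Derivative);

    public static Dual Sin(Dual a)                 // chain rule
        => new Dual(Math.Sin(a.Value), Math.Cos(a.Value) * a.Derivative);
}

class AdDemo
{
    static void Main()
    {
        // Differentiate f(x) = x * sin(x) at x = 2 by seeding dx/dx = 1.
        var x = new Dual(2.0, 1.0);
        var f = x * Dual.Sin(x);
        Console.WriteLine($"f(2) = {f.Value}, f'(2) = {f.Derivative}");
        // Analytically: f'(x) = sin(x) + x * cos(x).
    }
}
```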

We believe that by using direct automatic differentiation, we will be able to train the optimizer with F#. That is, not only the model but also the optimizer itself will be trained in the process for better performance. This way we can reduce the time required to train the same models to the right level. This is an incredibly interesting direction we’re working on right now. It’s being led by Don Syme.

In machine learning, your data is represented as, say, a 1024×1024 image split into three channels (red, green, and blue), to which various transformations are applied, and learning happens as a result. All of this converts huge arrays into different shapes. A big source of human error in machine learning is that the shapes get mixed up – in convolutions, in the various transformations – which leads to mistakes. And there's nothing the runtime does about it: it won't give us a "wrong shape" error; instead we get garbage, and then spend all day figuring out whether the error was in the algorithm, in the data, or in the shapes. We don't get an error, because all the transformations completed successfully – some of them just produced garbage.

Don Syme is also working on shape checking in the IDE, which can help with that. In other words, we can make the IDE check the shapes, and when you hover the mouse pointer you can see the resulting shapes. We can work not only with static shapes (1024×1024, for example) but also perform operations on them, draw conclusions, and surface errors. Say you have a 7×7 convolution; it may turn out that the model won't work with an image of that size without proper alignment. We'll be able to catch problems like that.
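
As a rough sketch of the kind of shape checking being described (a hypothetical helper in C#, not the actual F# tooling), the idea is to compute a convolution's output shape symbolically and fail loudly instead of letting garbage flow through the pipeline:

```csharp
using System;

static class ShapeCheck
{
    // Standard convolution output size: (input - kernel + 2*padding) / stride + 1.
    // Raise an error on misalignment instead of silently producing garbage.
    public static int ConvOutput(int input, int kernel, int padding, int stride)
    {
        int span = input - kernel + 2 * padding;
        if (span < 0 || span % stride != 0)
            throw new ArgumentException(
                $"Shape error: a {input}x{input} input does not align with a " +
                $"{kernel}x{kernel} kernel (padding={padding}, stride={stride}).");
        return span / stride + 1;
    }

    static void Main()
    {
        // 1024x1024 image, 7x7 kernel, stride 3: (1024 - 7) / 3 = 339 exactly,
        // so the output is 340x340 and the check passes.
        Console.WriteLine(ConvOutput(1024, 7, 0, 3));

        // Stride 2 leaves a remainder, so this throws immediately
        // rather than corrupting every downstream shape.
        Console.WriteLine(ConvOutput(1024, 7, 0, 2));
    }
}
```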

That’s what I’m working on at the moment. That’s all public information, there’s also information that I unfortunately can’t divulge for a few more years, but I can tell you that all the work is going on in the AI and AI execution environments.

– Then we’ll just meet in a few years and you can tell us about it.

Before you joined Microsoft, you had a period in your life when you were part of the .NET Foundation board of directors. What was the role of the board and your role on it, what were your goals on it?

The first foundation I was involved with was the GNOME Foundation. Foundations like this provide a public forum where companies can talk to each other; when we created the GNOME Foundation, that mattered because such cooperation is subject to U.S. antitrust law, and that is part of why a foundation was needed. Microsoft had been trying to set up various foundations around .NET and open source for many years. I don't know if you remember the Outercurve Foundation, of which I think I was also a member. In any case, there were several foundations, most of them controlled by Microsoft, and they were not exactly non-profit.

I didn't like the structure the .NET Foundation had a few years ago, and I think my parting gift to it was reshaping it to be more like the GNOME Foundation. More specifically, anyone who has contributed – code, documentation, QA, support, and so on – and who feels part of the .NET community can apply for membership. Membership gives you the right to elect the board of directors that guides the foundation's development. Before that, it was just a foundation set up out of a desire to have one to promote the technology, completely controlled by Microsoft. It was run in people's spare time; nobody paid constant attention to it.

So the last thing I did as a board member was change its structure. Microsoft had appointed me to the board as an outside member even before I joined the company. That was certainly a great honor, but I wanted the foundation to reflect the needs and desires of the community and to be owned by that community. After the changes, Microsoft can appoint only one board member and retains some influence over license changes – for example, we can't take all the .NET code and close it down; Microsoft kept that right. With that exception, all other members are now elected. I decided to step down rather than run for election, because I felt I would have had an unfair advantage at the time, that it was time for new members, and that I'd be happy to help and teach them. I think they're doing a good job, and the foundation is now open, just like the GNOME Foundation. That was my contribution: taking the existing foundation apart and making it more open. I don't know whether you've contributed code or something else – even this conference counts – but you should definitely apply for membership if you're not already a member.

A little bit personal

– You have an incredible amount of development experience, you’ve worked with a lot of different technologies and companies, maybe you have a couple of smart thoughts or tips you’d like to share with the audience?

I think I have two recommendations – two books that have helped me a lot with things people probably know intuitively. As developers, we know that once you start getting good at writing code, it's very addictive: you reach a place where you're writing your code, creating new worlds, making things work – it's amazing. And we sort of know where that feeling comes from, and we go with the flow. It turns out a psychologist studied this behavior and found that the thing that makes you a good programmer can help you do pretty much anything in life well. His book is called Flow: The Psychology of Optimal Experience. The author explains how people get better at things; he tried to understand what makes people bored, upset, or happy. He found that for every activity there are ways to make yourself passionate about it – this is the concept of flow – and he explains in detail how flow works. I found the book very helpful; if you're feeling unhappy or frustrated for any reason, it can come in handy. And the concept applies not only to programming but to everything else: cooking, sports, reading, just about anything.

The second book is called "The Art of Possibility", written by a man who works as the conductor of the Boston Philharmonic. I happened to meet him at a conference where he was scheduled to speak at 1:00, I think, but he never showed up. After dinner, right before we were about to go to bed, we were told that he had arrived and would perform now. It was a stunning presentation that changed my views on what is possible and what is not. You know how software developers can be quite pedantic: we learn to deal with compiler errors one at a time, we have to be very precise in our communication, and we try to find flaws in everything because we think like compilers. That's not always useful – it helps when writing code, but not particularly in life or when planning a project. "The Art of Possibility" is a great book that makes you rethink your approach to problem solving.

These are the two recommendations I’d like to give to young developers, and anyone in general.

– Thank you, that was very interesting. We have a little more time, and I have a little question that might surprise you. We know you’ve developed a lot of software, run or managed teams, but what are your hobbies?

Oh, I love writing code! Just kidding. I have three kids, ages 4 to 10, so I try to spend as much time with them as possible. It's amazing how quickly it all passes: my oldest daughter is already acting almost like a teenager, arguing with me, resenting me – and yet it feels like she was born yesterday. Time flies, and I try to give them as much of it as possible. Spending time with them is one of my hobbies, and in the evenings I try to write code.

This interview is from the December conference, and the next DotNext will take place April 20-23. There, too, you can learn a lot about the present and future of .NET, non-trivial problems, and best practices. There will also be prominent speakers from the .NET world – for example, Pavel Yosifovich and Raffaele Rialdi. You can see all the speakers and talks with descriptions on the website.
