• Sharing dynamic Objects Between Assemblies in C#

    I had a method in one assembly within my project namespace that accepted a dynamic parameter. Something like this:

    using System;

    namespace MyAssembly.Data {
        public class MyDataClass {
            public IResult Parse(dynamic input) {
                // do stuff…
                var data = (int)input.data;
                return new ClassThatImplementsIResult(data);
            }
        }
    }

    I then attempted to test this method by passing in a dynamic created by the test:

    using System;
    using Xunit;
    using MyAssembly.Data;

    namespace MyAssembly.Tests {
        public class MyDataClassTest {
            [Fact]
            public void Parse_Should_Parse() {
                var instance = new MyDataClass();
                var result = instance.Parse(new { data = 1 });
                Assert.Equal(1, result.ID);
            }
        }
    }

    But when I ran the test I got this exception:

    ‘object’ does not contain a definition for ‘data’

    This is because of the way anonymous types are generated under the hood. The type the compiler generates for new { data = 1 } is marked internal to the assembly it is declared in, and the dynamic binder obeys the normal rules of access control, so code in another assembly cannot see its members. To get this to work, I had to add the following line to the AssemblyInfo.cs file of my Test assembly (basically, think of which other assemblies you want the current assembly to share internals with):

    [assembly: InternalsVisibleTo("MyAssembly.Data")]

    And after that everything worked peachy! For other gotchas related to dynamic objects in C#, check out Gotchas in dynamic typing from C# In Depth.

  • Why You Should Not Use Medium for Your Personal Blog

    Clickbait articles, hyperbolic statements, fresh graduates from 4-month code camps declaring that all current programming languages are dead, painfully un-subtle product plugs. Medium articles are everywhere now, and not a day goes by without at least two or three of the damned things showing up on the front page of Hacker News or some other aggregator of programming articles. They have become formulaic to the point of parody, and I think you are doing yourself a disservice as a tech/programming blogger by hitching your wagon to the sleek green-and-white machine. It is often a platform for people who have a lot to say on Twitter, but too little substance to write actually useful articles.

    I know, it is hypocritical to slam content-less clickbait articles with a clickbait-y article of my own, but there is a point. I’ll try to stop here with the actual bashing of Medium and go into some real pros and cons of running your own blog vs. hosting on Medium.


  • The Fundamentals of Flow in 10-ish Minutes

    I’m still kind of undecided on the best way to add type checking to JavaScript. On the one hand there is TypeScript, which a lot of people seem to be gravitating toward. It’s a superset of JavaScript that adds a LOT of new language features, as well as compile-time type checking. It’s backed by Microsoft and has widespread support in other projects like Angular. It’s easy to add to an existing project, and having used it briefly myself, the ecosystem behind it (including typing libraries for projects like Lodash and React) and the benefits it brings are outstanding. I really feel like this will be the future of JavaScript.

    On the other hand there is Flow, which was created by Facebook. It’s also a compile-time static type analysis tool, and like TypeScript you can adopt it gradually, adding // @flow to the top of the .js files you want type-checked. Flow doesn’t aim to add a lot of new language features like TypeScript does; rather, it attempts to ensure the correctness of your JavaScript code through type analysis. Here’s a good comparison article between the two if you want to do some further research: http://michalzalecki.com/typescript-vs-flow/.
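    To get a feel for it, here is a minimal sketch of Flow's comment-based annotation syntax (the function and values are my own illustration, not from the post). Because the annotations live inside comments, the file remains plain JavaScript and runs in Node without any build step:

    ```javascript
    // @flow
    // Flow's comment syntax keeps the file valid plain JavaScript,
    // so no compile/strip step is needed just to run it.
    function add(a /*: number */, b /*: number */) /*: number */ {
      return a + b;
    }

    console.log(add(2, 3)); // prints 5

    // Running `flow check` would flag a call like add('2', 3) as a
    // type error, while Node itself would happily run it.
    ```

    This gradual, comment-only style is one reason Flow is easy to trial on a single file of an existing project before committing to it.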

    A little while back I watched a video by Alex Booker (@bookercodes), a developer at Pusher, that serves as a great introduction to Flow. Check it out below; there’s a similar type of video on the TypeScript website.

  • My Writing Blog

    Just a little plug for my writing blog, where I am posting quotes, essays, short stories, and personal blog posts not related to tech. Check it out at https://writing.martin-brennan.com.

  • Terminal Shortcuts

    A friend sent me this list of useful terminal shortcuts:

    Ctrl-A: go to the beginning of line
    Ctrl-E: go to the end of line
    Alt-B: skip one word backward
    Alt-F: skip one word forward
    Ctrl-U: delete to the beginning of line
    Ctrl-K: delete to the end of line
    Alt-D: delete to the end of word
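    These are GNU Readline's default Emacs-mode bindings, so they work in bash and most other readline-enabled programs. As a sketch, the same bindings can be spelled out in ~/.inputrc using readline's function names (these lines simply restate the defaults):

    ```
    # ~/.inputrc — readline key bindings (Emacs-mode defaults)
    "\C-a": beginning-of-line
    "\C-e": end-of-line
    "\eb": backward-word
    "\ef": forward-word
    "\C-u": unix-line-discard
    "\C-k": kill-line
    "\ed": kill-word
    ```

    Knowing the function names also makes it easy to rebind them to other keys if the defaults clash with your terminal.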

    If you are using Windows, I can’t recommend cmder enough. It is miles ahead of other terminals, and you can use Bash and PowerShell inside it too.

  • JSON Schema

    When writing a common data transfer format, you need a strong schema or specification so that each client using the format knows how to parse, validate, and construct data with it. In XML you have XSD, which specifies the validation rules and elements expected in an XML file, as well as the types of data expected (strings, integers, dates, etc.). When using JSON, the best way to achieve this is with JSON Schema, and I’ll give a quick run-through of how to use it and what you can do with it in this article.
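    As a taste of what that looks like, here is a minimal sketch of a schema (the Person shape and property names are my own illustration) that requires an object with a string name and allows an optional non-negative integer age:

    ```json
    {
      "$schema": "http://json-schema.org/draft-04/schema#",
      "title": "Person",
      "type": "object",
      "properties": {
        "name": { "type": "string" },
        "age": { "type": "integer", "minimum": 0 }
      },
      "required": ["name"]
    }
    ```

    A validator can then check any incoming JSON document against this schema before your code ever touches it, much like an XML parser validating against an XSD.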


  • Why xUnit?

    I was fairly recently introduced to xUnit by a work colleague, and I now prefer it over the default Microsoft unit test project format. In fact, I feel kinda dumb for having used the default for so many years when there are alternatives like xUnit and NUnit, plus mocking libraries like Moq. Granted, I haven’t tried NUnit myself, but I thought I’d write about what I like about xUnit over the default MSTest framework.
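    To give a flavor of the difference, here is a sketch of an xUnit test class (the Calculator class and values are invented for illustration): there is no [TestClass]/[TestMethod] boilerplate, the constructor does per-test setup because xUnit creates a fresh instance for every test, and [Theory] with [InlineData] gives you data-driven tests.

    ```csharp
    using Xunit;

    public class Calculator {
        public int Add(int a, int b) => a + b;
    }

    public class CalculatorTest {
        // xUnit constructs a new CalculatorTest per test method,
        // so the constructor replaces MSTest's [TestInitialize].
        private readonly Calculator _calc = new Calculator();

        [Fact]
        public void Add_Should_Sum_Two_Numbers() {
            Assert.Equal(5, _calc.Add(2, 3));
        }

        // [Theory] runs once per [InlineData] row, which is a much
        // cleaner way to cover multiple inputs than copy-pasted tests.
        [Theory]
        [InlineData(1, 1, 2)]
        [InlineData(-1, 1, 0)]
        public void Add_Should_Handle_Various_Inputs(int a, int b, int expected) {
            Assert.Equal(expected, _calc.Add(a, b));
        }
    }
    ```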


  • Easy HTTPS With Let's Encrypt

    Well, since my last post on HTTPS I’ve gone and put an SSL certificate on my webserver and forced HTTPS for all connections to this site. I decided to do this tonight while I was doing some other poking around over SSH, and found that it was even easier to set up than I thought it would be. To accomplish this, I used Let’s Encrypt, which issues free SSL certificates so more websites can be served over HTTPS. For those not in the know:

    Let’s Encrypt is a free, automated, and open certificate authority (CA), run for the public’s benefit.

    First of all, I headed over to the Getting Started page of Let’s Encrypt, which sends you to the certbot tool. From there, you choose a web server and an operating system and are given detailed instructions on how to install the certificate. Basically, certbot fetches and deploys SSL/TLS certificates, which you then configure your web server to use. For example, this site is running on an Ubuntu DigitalOcean droplet with the nginx web server. I found the guide How To Secure Nginx with Let’s Encrypt on Ubuntu 14.04 extremely helpful to read through during this process, and it also links to the SSL Server Test, which will let you know if you’ve done everything correctly.
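    To give a rough flavor of those instructions, the steps for nginx on Ubuntu look something like this (the domain and webroot path are placeholders; follow the instructions certbot generates for your exact web server and OS):

    ```shell
    # Install certbot (on Ubuntu 14.04 this was done via the letsencrypt
    # packages/certbot-auto script; newer releases ship a certbot package).
    sudo apt-get update
    sudo apt-get install certbot

    # Obtain a certificate using the webroot plugin, then point nginx's
    # ssl_certificate directives at the files certbot places under
    # /etc/letsencrypt/live/<your-domain>/.
    sudo certbot certonly --webroot -w /var/www/html -d example.com

    # Certificates last 90 days, so test that automated renewal works:
    sudo certbot renew --dry-run
    ```

    After that, a cron job running certbot renew keeps the certificate from expiring without any manual work.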

    Overall, I found the process easy, and the guides and tools provided are top-notch. The people behind Let’s Encrypt have done a great job, and I’ll be recommending it to colleagues who are looking to do the same thing. You can now rest easy reading this blog and knowing it was written by me, Martin Brennan, and not the Illuminati injecting subliminal messaging driving you to BUY CONSUME REPRODUCE ▲.

  • Google Chrome to Start Marking HTTP Connections Insecure

    Google Chrome has been steadily marching toward this end for some time now. From January 2017, Google will start flagging pages served over HTTP as Not Secure. The way this will work in Chrome is that an indicator will be displayed in front of the address bar, like the one currently shown for websites served over HTTPS with an invalid certificate. This will only be done on pages with credit card or password fields, which should have been served over HTTPS in the first place anyway. Firefox has already adopted this behaviour, and the main reason for it, shared by the Chrome team, is that HTTPS makes MITM (man-in-the-middle) attacks much harder to pull off.

    Though this approach is sound, today I was thinking about the impact it may have on regular users who may not be aware of the different indicators in the address bar, or of the concepts of HTTP/HTTPS and secure/insecure sites. Will they be alarmed that more sites suddenly have red “error” messages in the top bar? Will this behaviour, important though it may be, detract from even more pressing security issues, such as expired or invalid certificates, creating a “boy who cried wolf” situation? It would be really interesting to observe a regular user of Google Chrome and see what they do in these situations. Nevertheless I’m sure the Chrome team has thought of this, and will hopefully have helpful documentation for the lay-user.

    I’m still not entirely sure that serving static sites, such as this blog, over HTTPS is worth it. I do not have any pages that accept credit card or password details. Many would probably argue the point and scold me for not realizing the importance of it, saying that my reliance on third-party services like Disqus for comments and Google AdSense for ads opens up attack vectors if I serve my blog over plain HTTP. I understand what proponents of an HTTPS-everywhere web are trying to achieve: making passive eavesdropping by government agencies and malicious hackers much more difficult. And I do believe that at some point all pages will be HTTPS. As I saw it described today: if one person wears a mask they are suspicious; if everyone wears a mask it becomes the norm. I just struggle to find time to set up an SSL certificate. Services like Let’s Encrypt may ease the pain of doing this, but it’s very low on my list of things to do. Though that platform is not without criticism (for example, it does not offer free wildcard SSL certificates), I believe it is still a step in the right direction to get more people and websites onto HTTPS.

    Google is also said to be pushing encryption as a factor in its search ranking algorithm, so it may soon become much more important to have a site served over HTTPS to stay relevant in search results. If anyone has any advice or thoughts on the matter, or on whether it is important for me to do it on this blog, please let me know in the comments below!

    If you want to read further on the subject, Jeff Atwood wrote a great article on the topic called Let’s Encrypt Everything.

  • ng-stats AngularJS Profiling Tool

    I found a useful tool for profiling AngularJS applications last month called ng-stats. Here is what the tool is for, taken from its GitHub page:

    Little utility to show stats about your page’s angular digest/watches. This library currently has a simple script to produce a chart. It also creates a module called angularStats which has a directive called angular-stats which can be used to put angular stats on a specific place on the page that you specify.

    Here’s what it looks like when you run it using the bookmarklet:

    The first number is the number of watchers on the page (including $scope.$watch, etc.). The second number is how long (in milliseconds) it takes Angular to go through each digest cycle on average (bigger is worse). The graph shows a trend of the digest cycle average time.

    It’s really great to watch this while you click around your app to see where the hotspots are, and places where memory management could be improved. However, you should keep in mind that in-depth profiling will be required if you want to really see where the problem spots in your application are!



