What does the question mark and dot operator ?. mean in C# 6.0?

C# 6.0 introduced a new operator – the null-conditional operator ?.

The way it works is very simple. If the left-hand operand is null, the whole expression evaluates to null; otherwise, the member access to the right of ?. proceeds as usual. The rest of the chain short-circuits, so no NullReferenceException is thrown.

Simple example:

var thisWillBeNull = someObject.NullProperty?.NestedProperty;
var thisWillBeNestedProperty = someObject.NotNullProperty?.NestedProperty;

Executable example:

//Rextester.Program.Main is the entry point for your code. Don't change it.
//Compiler version 4.0.30319.17929 for Microsoft (R) .NET Framework 4.5

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text.RegularExpressions;

namespace Rextester
{
    public class Person
    {
        public string Name { get; set; }
    }
	
    public class Dog 
    {
        public Person Owner { get; set; }
    } 
    
    public class Program
    {
        public static void Main(string[] args)
        {
            Dog dog = new Dog();
            Console.WriteLine(dog.Owner?.Name == null);
            // this will print True
            dog.Owner = new Person() { Name = "Fred" };
            Console.WriteLine(dog.Owner?.Name); 
            // this will print Fred
        }
    }
}

The above example can be executed here: https://rextester.com/TBC19437

Documentation: https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/operators/member-access-operators#null-conditional-operators--and-
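For comparison, JavaScript and TypeScript later gained the same construct – optional chaining, standardized in ES2020 – with the same short-circuiting semantics. A minimal TypeScript sketch mirroring the C# example above (the Person and Dog shapes are just translations of the sample):

```typescript
// Optional chaining (?.) works like C#'s null-conditional operator:
// if the left-hand operand is null or undefined, the chain yields undefined.
interface Person { name: string; }
interface Dog { owner?: Person; }

const dog: Dog = {};
const noName = dog.owner?.name;     // undefined – owner is missing, chain short-circuits

dog.owner = { name: "Fred" };
const ownerName = dog.owner?.name;  // "Fred"

console.log(noName === undefined);  // true
console.log(ownerName);             // Fred
```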

Daily at 9 AM

A software house is not a convenience store. Work can start at pretty much any time. What matters is that the people who perform tasks together can communicate, meet, and share their thoughts. This flexibility is important because a chronotype is said to be like human height – you can't change it without breaking a bone.

What is a chronotype?

A chronotype is the organism's preference for waking early or staying up late – being a morning lark or a night owl. Apparently, in traditionally living tribes it breaks down so that about 25% of the population prefers the night, 25% the morning, and the rest something in between. This would serve the group's survival, because at every moment of the day and night someone is awake to keep watch.

So we have Mark, who gets up at 5 AM, and Bill, who falls asleep at 1 AM. Theoretically, they both have flexible working hours and must meet on a call from time to time.

Then the scrum master sets up the daily scrum meeting at 9 AM.

Mark is cool: he comes in at 7 AM, eats breakfast, poops, browses cat pictures on the Internet, and is fresh and ready to confess yesterday's tasks.

Bill, on the other hand, gets up sleepy every business day, because he would like to come in at 12 and cannot fall asleep earlier. Frustration grows, and IQ decreases. He sleeps well only on weekends and holidays. Every morning he is in a hurry, sleepy at this damn daily and for the whole day afterwards. In addition, the team looks at him crookedly. Lazy, late, sleepy, cursed black sheep!

Meanwhile, research suggests that night owls are more intelligent, not lazy: https://www.psychologytoday.com/us/blog/the-scientific-fundamentalist/201005/why-night-owls-are-more-intelligent-morning-larks

This seemingly unimportant organizational habit – holding the daily at 9 AM – makes a significant part of the team inefficient. How large a part depends on luck, but statistically around a quarter. There is no reason why daily meetings can't be scheduled at the end of the day or in the middle of it. It is simply a habit, a ritual that has spread and is mimicked recklessly.

To sum up: when organizing a scrum team, it is worth paying attention to the members' preferences regarding the hours at which meetings take place. This also applies to meetings around lunchtime, when some people may simply be dying of hunger during discussions that last several hours.

Early daily scrum meetings can reduce the team's performance as long as any night owls are among its members. It is worth remembering.

Senior developers’ greenfield

You have a million dollars to spend and an idea for a product. What would you do?

You would recruit the best people and believe that their experience will make the team effective in creating high-quality software, right?

But…

You come after three months and you see that:

  • the first month passed in discussions about the branching strategy, test architecture, deployment, cloud provider, CI/CD technology stack, the application itself, the choice of linter and code formatting standard, and several other similar topics
  • one and a half months were spent implementing the cloud infrastructure, automatic deployments, the application architecture and the refinement of tasks
  • in the last two weeks, the team delivered the header and footer as well as user login and registration

You feel hypertension pressing against the walls of your arteries, your kidneys itch from adrenaline production, and your tooth enamel creaks from nervous tension. However, you also learn that:

  • Max is going to quit because he disagrees with Peter regarding the chosen infrastructure of the application
  • Anne has hardly spoken for two weeks and nobody knows what she is doing, but apparently she is configuring the cloud
  • Half of the code was written by Xi
  • Matt is hated because he does not accept pull requests
  • Mark wants to be a team leader even though you have agreed that there will be no team leader

You return home, open an 18-year-old Glenfiddich and, with each sip, become more and more convinced that IT is a swamp.

What went wrong?

Of course, the described situation is exaggerated, but there is a lot of truth in it. There are two problems – greenfield and seniority.

Greenfield is a new, fresh, pristine project – one built from scratch, where devs have the freedom to choose the technology, practices, architecture – practically everything. This is the dream of many programmers locked in the cages of legacy-system maintenance.

High seniority is good, valuable, almost priceless, but everything needs balance, and good is never pure – there is always a flaw in it. The flaw of the senior developer is ego. Many experienced programmers have seen so much that they are convinced they are always right. The truth, however, is that everyone is sometimes wrong, and that many opinions in IT do not matter – or they do, but the profit is smaller than the cost of not making a decision.

Gathering many seniors in a greenfield project is a risky venture. The ability to choose technology and architecture inevitably creates a discussion in the first phase of the project. The more seniors, the more frenetic this discussion will be. Juniors or mids will rather adapt – seniors will usually stubbornly defend their opinions. This is understandable, of course, but it is disastrous for this type of project.

A ‘self-organizing team’ is an additional source of problems. When there is no official leader, there is a fight for power. When there are no obedient sheep – because we selected only old stagers – we have a problem.

A senior devs’ greenfield is a synonym for failure. When selecting team members for a greenfield project, let’s not be tempted to choose only experienced professionals. Appointing an official leader is also a good idea – it will speed up the building of the hierarchical structure, reduce the struggle for power, and give team members clearly defined responsibilities.

The naming problem in programming

Reading source code is inherently more difficult than writing it. Everybody who has worked with a legacy system and needed to understand the authors’ intent knows what I mean. Browsing through thousands of lines of function definitions and variables, trying to figure out the point, is a daunting task.

But why, actually?

Is reading a recipe difficult?

Is it more difficult than writing it?

Is reading a hundred-year-old recipe more difficult than understanding a thirty-day-old one?

I don’t think so.

Why, then, does reading source code get more difficult as the code matures? What can be the reason for software decay and the common hatred towards legacy systems?

Let us think about the act of reading source code. What do we do when we try to understand the flow of a computer program?

We are a bit like computers, only less effective. We go through the code and read the variables and functions. Because we are not as fast as machines, we can’t remember the value of each variable, and we are unable to memorize the body of each function or subroutine. Our “understanding” is based on approximation. We read the name of a variable and try to guess its usage, its meaning. We read the name of a function and, without reading its body, try to figure out what it can do.

It’s exactly the same as in the real world. When we read about “a carrot” in a recipe, we imagine a carrot. We can understand the concept of a carrot without receiving a lengthy list of details such as its genetic code, mass, temperature or color.

But source code – even with the entire effort of object-oriented paradigm – is not like the real world.

In the real world we have a huge but limited number of words in our vocabularies. Natural language is processed by human brains in a completely different manner than source code is by the compiler. For example, “a chair” to a human being is not a particular chair but the idea of a chair. In most cases, we don’t need detailed definitions to process natural language. On the contrary – in rare cases, like the law, we have big problems with definitions.

A very good illustration of what would happen to natural language if we processed it as literally as computers process source code is any of the popular videos in which instructions are followed completely literally.

Let’s go back to the process of software engineers reading source code. We take a name (of a function or variable) and guess. As long as the code is “clean” and the names still match our instinctive understanding, our guess is roughly correct. In that situation, the process of reading is smooth and painless.

The issues pop up when we can’t guess properly.

But the real issue is far greater. It’s one of the fundamental problems of software development, partly captured by a quote from Phil Karlton:

There are only two hard things in Computer Science: cache invalidation and naming things

Phil Karlton

I mean naming things.

Naming things is the point of incompatibility between the world of humans and the world of computers. In the real world, where language is processed by human minds – which are able to process ideas, classes of objects – we rarely invent new words. In the realm of a computer program, we do it constantly.

What happens when we invent a new word in a natural language? We learn it. The new word “computer” arrives and all people learn that it’s a kind of computing machine. It doesn’t matter if it’s a MacBook Pro, a new Dell XPS, ENIAC or a PC. We don’t have a new word describing computers slightly differently in every company, or even in every company’s department.

But in the software world, we do. “User” means something different in every single program written so far. In one it’s just a user name and surname. In another, it also includes the date of birth. In yet another, it’s only sex and nickname.
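A TypeScript sketch of that divergence – the same word, “user”, bound to a different shape in each program (all three interfaces are invented for illustration):

```typescript
// In one program a "user" is just name and surname...
interface BillingUser {
  name: string;
  surname: string;
}

// ...in another the same word also carries a date of birth...
interface CrmUser {
  name: string;
  surname: string;
  dateOfBirth: Date;
}

// ...and in yet another it's only sex and nickname.
interface ChatUser {
  nickname: string;
  sex: string;
}

const billing: BillingUser = { name: "Ada", surname: "Lovelace" };
const chat: ChatUser = { nickname: "ada1815", sex: "female" };

// The word is identical; the meaning must be looked up per project.
console.log("name" in billing, "name" in chat);  // true false
```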

We simply can’t name anything properly in a computer program. Every construct within the code fails to match the real-world meaning of the word.

We can be too vague – say, naming the user “user” – or too precise – “userWithNameAndSurnameAndSexAndDateOfBirth”. We will almost never be able to fully express an object by its name. That’s why reading source code is so difficult. The words – the names of variables and functions – never mean what we believe they mean. We always need to go to the definition and check. Every time we check, we learn the new language of a particular software project. Learning thousands of new words is difficult. Therefore, reading source code is difficult…

DRY is dead

The DRY principle, together with YAGNI, SOLID and KISS, is one of the most popular acronyms that shaped our way of thinking about developing software. It is simple, intuitive and easy to learn even during the early stages of education. However, the principle was born in completely different circumstances than the ones we are dealing with today.

Simple idea

I’m not a historian of software development and I’m not sure how the DRY principle was born, but I guess it was created during the procedural programming age. It reeks of a procedural way of thinking anyway.

The idea is simple. We have some code, and the code should be organized. Whenever some part of the code repeats here and there, we should create a procedure – extract the block of code, give it a name and reuse it.
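A minimal sketch of that procedural reflex in TypeScript (the price computation is an invented example):

```typescript
// The same block repeated in several places...
// const priceA = Math.round(100 * 0.77 * 100) / 100;
// const priceB = Math.round(250 * 0.77 * 100) / 100;

// ...so DRY says: extract it, give it a name, reuse it.
function netPrice(gross: number): number {
  // apply a 23% deduction and round to two decimal places
  return Math.round(gross * 0.77 * 100) / 100;
}

const priceA = netPrice(100);  // 77
const priceB = netPrice(250);  // 192.5
console.log(priceA, priceB);
```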

Time flows

Since the days of procedural programming, things have changed. First of all, the object-oriented paradigm exploded. The complexity of software kept growing. The systems for accounting, for summing long rows of numbers, for generating reports had already been created. The new frontier was internet browsers, instant messaging apps, trading systems for companies and Snake for the Nokia 3310. Except for the last one – it was quite a challenge.

The DRY principle doesn’t fit OOP as well as it fit the procedural paradigm. Actually, if you think about it, it doesn’t fit at all.

Let’s think for a while – what happens when we try to avoid repetition in object-oriented code? The first thing that comes to mind is probably inheritance – that beautiful, useless idea. The dog has a name, the cat has a name, so let’s create a class Animal with a property Name. But wait a second – wild animals don’t have names. Let’s create WildAnimal and DomesticAnimal. Damn! – almost nobody gives names to fish…
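A TypeScript sketch of that dead end, plus the compositional alternative (all class and interface names are illustrative):

```typescript
// The inheritance route – a Name forced into the base class –
// breaks as soon as wild animals and nameless fish show up:
// class Animal { constructor(public name: string) {} }

// Composition instead: keep the base class minimal and attach
// naming only where it actually applies.
class Animal {
  constructor(public species: string) {}
}

interface Named {
  name: string;
}

class PetDog extends Animal implements Named {
  constructor(public name: string) {
    super("dog");
  }
}

class Shark extends Animal {
  constructor() {
    super("shark"); // no name – and no awkward WildAnimal branch needed
  }
}

const rex = new PetDog("Rex");
const bruce = new Shark();
console.log(rex.name, bruce.species);  // Rex shark
console.log("name" in bruce);          // false
```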

The second popular solution to the repetition problem is utils or commons.

There’s a secret rule in the software industry – every sufficiently complex project has a utils directory or class. Some of them run to 8k or 16k lines of code.

It’s avoidable – it is possible to properly design object-oriented software without these cancer cells of utils and disastrous inheritance. Keep in mind, though, that both of them are the result of applying the DRY principle. We tried, in the easiest, cheapest way, not to repeat ourselves.

Microservices – nail in the coffin

Once upon a time I asked a colleague who worked at Amazon – the company which is a role model, a pioneer of microservice architecture – how they organize the common parts of a project and how they manage reusability. He answered:

We don’t. We do repeat. It’s cheaper and quicker at that scale of the project. 

The enormous size of the systems we develop nowadays entails a new approach and different rules. The most visible tendency recently is breaking problems and systems down into smaller ones. It has actually been one of the main techniques since the beginning of software development, but recently it has become more important than ever before. We can spot this trend in front-end frameworks (Angular, React – componentization) as well as in the back-end (microservices architecture).

To some extent, we can think of it as the proper way of doing object orientation – more proper than inheritance. Organisms are similar but not the same. They do not, strictly speaking, share features. The human eye is not the same as a dog’s or a hawk’s eye. Only seemingly, at the level of naming these objects, are they the same. The implementation details differ greatly. I’m not a geneticist, but I bet that if we cut out of human DNA the parts we don’t share with monkeys, the remainder would not produce a monkey. I guess there are many subtle differences – some small pieces of genetic code, a few “lines” – which make the difference even if most of the code is the same.

What to do?

It seems that the DRY principle has become harmful. Should we stop using it? Maybe. For sure, we should use it more carefully. In many scenarios it may bring more harm than good. In some cases repeated code is a sign of failure; in others, it may be the best possible solution.

Is it bad to repeat identical code twice? If we repeat within the same class – I guess it is; within the same module – maybe; if it’s repeated in the same 100k-LoC project, but in different modules – maybe not.

Is partial code repetition bad? Well, maybe it’s not bad by default – it depends on whether a proper abstraction can be created to avoid it. Quite often we apply principles very strictly. Don’t. Don’t follow these rules blindly, because they’re merely suggestions.

How to delete all documents from a collection in Google Cloud Firestore?

To delete all documents from a Firestore collection, we need to fetch the collection, iterate over its documents and delete each one of them, awaiting all the deletes so we know when the operation has finished:

const { Firestore } = require("@google-cloud/firestore");

const db = new Firestore({
  projectId: "projectId",
  keyFilename: "./key.json"
});

db.collection("collectionName")
  .get()
  .then(snapshot =>
    // delete every document and wait until all deletes complete
    Promise.all(snapshot.docs.map(doc => doc.ref.delete()))
  );

What happens when the scrum team gets too big?

Participating in an overgrown scrum team is a fascinating experience. It allows us to observe how the framework collapses under its own weight.

Basically, scrum consists of meetings: daily stand-ups, backlog refinement, sprint planning, review and retrospective. In theory, these meetings can consume up to 22.5% of a developer’s time – for a one-month sprint, the Scrum Guide caps planning at 8 hours, the review at 4, the retrospective at 3 and the daily at 15 minutes per day, which adds up to roughly 20 hours, or 12.5% of a 160-hour month, plus up to 10% of capacity for backlog refinement. That’s a lot. But as always – it depends.


Observation 1 – meetings get longer

Let’s start with the daily scrum. It should last no longer than 15 minutes. As long as the team is just a few developers big, that’s easy. But when we have 15 devs, we can give everyone only one minute to speak, which is pretty much impossible. Either we destroy the idea of the daily meeting – where team members can properly describe what they worked on and ask for help – by forcing them to speak extremely briefly, or we extend the time. In a team of 14 developers, I’ve seen the daily scrum grow to 25 minutes.

The situation is exactly the same with every other meeting. In the case of refinements or the sprint review it gets even worse, because these meetings are naturally longer than the daily scrum.

Observation 2 – meetings become full of things you don’t need to hear

As the team grows, the scope of work grows. Naturally – as in any large enough group – subgroups form. It can be any kind of division: front-end and back-end devs; people working on feature A and another group working on feature B; Java and JavaScript programmers. In their everyday tasks they barely talk to each other, unless they need to discuss some API or contract that lets their parts of the system communicate. But if the scrum team doesn’t split, they’re forced to attend the meetings together.

Then you find yourself in a meeting where half of the time is spent discussing things you don’t need to hear. You’re, let’s say, a JavaScript developer, and the last 30 minutes of the meeting were a discussion about the back-end; or you’re designing the database structure while the implementation of the system’s UI is being estimated.

Observation 3 – productivity plummets

You want to code. That’s why you became a software engineer. You like to focus; you love to create lines of instructions. This is also, all in all, why you’re getting paid. You automate processes so that people don’t need to be employed to perform them manually, and the company becomes more effective. The money saved on automation goes into your pocket. Everybody is happy.

Unless you can’t code.

When scrum meetings get longer, they consume your coding time. When they get boring, they consume your mental energy. You become less productive. Not cool.

That’s why the Scrum Guide recommends a scrum team of between 3 and 9 developers. Even if the team seems difficult to divide – it’s necessary. The alternative is a small crowd of unhappy, ineffective developers.

Meeting hell

Another meeting that could have been an email – that’s my favorite complaint about meeting hell.

You know it. You know it well. You just want to sit and code; you want to focus for an hour or two and produce some value for the company, for the customers, for the world. Then it shows up: an Outlook reminder. In 15 minutes there will be a meeting. You’re done. You don’t even try to start working anymore, because you know it’s impossible to get anywhere. The time is too short.

The meeting takes five minutes even to begin. Joe comes from another meeting that ran longer than expected, so he’s late. Jane is in the toilet. Mark is always late.

During the meeting you listen to a lot of stuff that doesn’t concern you at all – a problem affecting two or three people out of the eight or ten gathered in a tiny room without enough air (and by the way, that lack of oxygen affects you too).

Finally, you get out of the meeting and it’s lunchtime. After lunch you’re a bit sleepy, and it takes another 30 minutes for the blood to come back to your brain from your stomach.

Half of a day is gone.

Meetings are necessary. Organizing them effectively, however, is rare. Perhaps there are some atavistic mechanisms that make us prone to long, pointless discussions. As far as I’ve noticed, it’s mostly a lack of emphasis on the effectiveness of the knowledge-exchange process.

How can we avoid the fluff? In some cases it’s tough. But more often than not we can ask ourselves some helpful questions:
– does the topic require all attendees to be present at the meeting?
– are we prepared for the meeting?
– do we have an agenda?
– is everybody interested, and do they all need to attend?
– do we need to make a decision quickly, or can we use e-mail instead of a meeting room?

Additionally, boring meetings are a soul-crushing experience, so I would add a few more:
– is the pace of the meeting too slow, too sleepy?
– is the pace of the meeting too fast, too stressful?
– aren’t we too serious?
– aren’t we too relaxed and off-topic?
– do we take breaks?
– don’t we have too many people in the room?

I believe that minimizing the number of meetings and organizing them effectively can be an amazing improvement in a developer’s workplace. Mental energy is the most precious asset a software engineer brings to the company, and it gets sucked away by boring, long, nasty meetings held in crowded rooms. We should take care of it. It creates software. It writes code. This energy runs the business. If we drain it, we will have huge problems.