Sunday, December 20, 2009

Software Craftsmanship in Israel

During the last few weeks I have been busy forming a new user group for software professionals.

I am very pleased to invite you to join the group: "Software Craftsmanship in Israel".

My aim is to discuss and promote professional software skills through a series of articles, posts and monthly meetings. There are plenty of issues to discuss: education in an era of explosive technology growth, refinement of software skills (katas), Agile processes and methods (Refactoring, xUnit Frameworks and Patterns, TDD...) and, of course, Architecture Styles and Design (selecting the appropriate tools for the requirements).

You are most welcome to join the group!

(Your suggestions and comments will also be highly appreciated.)

Saturday, December 19, 2009

How will you find your next job?

(Cross post from IRefactor)

So, you have decided to move on...

You have updated your CV and sent it to your friends... Then you sent it to a couple of recruitment companies... Then you sent it to all known recruitment companies...

After a while...
You have been called to an interview... and another one... and another one...

Weeks later...
You got a proposal (or several) and decided to accept a job: the compensation is a little better than your current one. The day-to-day issues are the same; after all, you are paid to develop more or less according to your experience. Yet you are pretty excited about the change; you are going to work with new people and form new collaborations. Though minor, this cannot be dismissed; after all, we are social "animals", and changing groups and places has its effect.

Hey, but did you stop to think for a moment whether the next place is going to be a better one? After all, that's the purpose of you moving on, isn't it?

Many posts deal with how to hire the best of the best Software Engineers (and I am no exception), but this post is dedicated to the job seekers themselves.

Here is how you will find your next job:
A disclaimer:
-There are many ways to evaluate a company; I am focusing on technological/professional evaluation using social media.
-Below is a short list of criteria, followed by my explanation of why those criteria identify a better company.

Learn the company's executives' bios:
  • Who are its managers (company's web site)?
  • Where are they mentioned (Google, TechCrunch)?

    • How often are they mentioned?
    • Do they state their technology/product vision in a clear way?
    • Can you identify the company's future roadmap?
Search for the company's profile (if not available, search for employees' profiles): (LinkedIn, Facebook)

  • Look for the current and former employees; are you familiar with any of them?
  • Look for the current employees' profiles:

    • What is their average experience?
    • Do they have any professional blogs? If yes, then:

      • Do they deal with Software Architecture and/or Design?
      • Do they discuss innovative ideas in terms of Software Engineering or Software Management?
      • Do they contain any posts explaining the company's software development decisions?
      • Do they emphasize/teach how things are being done inside their company?

    • Do any of the employees lecture (occasionally or regularly) on Software Engineering topics? (Given that the company isn't a professional training company.)
    • Do any of the employees contribute to open source? (LinkedIn, Google, CodeProject, CodePlex)
    • Do any of the employees attend (occasionally or regularly) professional conventions? (LinkedIn Events, LinkedIn Groups, Google Groups, Twitter)

Search for the company's additional activities:


And here is why, in my opinion, the above criteria will help you to identify the better companies:
  • Good executives create a lot of buzz around their company, either by being cited by others or by expressing themselves through the press, articles, blogs or tweets (a good example is Joel Spolsky and his columns at joelonsoftware).
  • Though this is expected, a clear definition of the technologies/products, backed by a consistent and proven history of achievements, signals a higher chance of success for the company's roadmap (and future).
  • Moreover, such executives usually create a supportive climate in which Software Engineers thrive. In such a climate, Software Engineers are driven by mutual success in terms of products and technologies.
  • Good Engineers, in their turn, usually blog or tweet about their professional experience.
  • Really good Engineers not only discuss a specific technology, but also much wider aspects like Architecture, Design and Software Management. You will have a lot of fun learning from and working with them, especially if you spot the following concepts in their posts: S.O.L.I.D principles, Test Driven Development (TDD), Unit Testing, Continuous Integration and Static Code Analysis (also here). Those are the signs of people who care about high-quality products!
  • The best Engineers contribute to open source. They spend their spare time coding and refining their professional knowledge (also here). Not only do they enhance themselves, but they also contribute a great deal to others (that's the beauty of open source). Be sure, they will also contribute to your knowledge and skills when you work with them.
  • Thus, being technologically thirsty, those Engineers will attend professional conventions and events, and will eventually drive their companies to support such activities.

Remember, it may seem like a long and tedious investigation, but it pays off if you really aim to find a great place to work.

Friday, December 4, 2009

Experts Days - 2009

This week we concluded the Experts Days.

The sessions were effectively organized (by Eyal Vardi from E4D), the audience was amazing and the atmosphere was energizing.

Here are the headlines of my sessions:

I. What's new in .NET 4.0 and VS 2010:

The main focus was to emphasize the most important (in my opinion) upcoming features.

Here they are:

  • Task Parallel Library, PLINQ and Coordination Data Structures
  • Code Contracts
  • Reactive Extensions (Rx) Framework
  • Managed Extensibility Framework (MEF)
  • Dynamic Language Runtime (DLR) and the new F# language support

Also, we reviewed a lot of other upcoming features in .NET 4.0 and Visual Studio 2010 (IDE).

For those willing to take a deeper look, here is a recommended book.

II. What's new in ADO.NET 4.0

The main focus was to discuss many improvements to the Entity Framework.

In my opinion, the newly added capabilities finally make the framework a handy ORM tool.

Here are the main features we reviewed:

  • Model First Development
  • POCO, Self Tracking Entities and T4 support
  • Code Only
  • Lazy Load
  • IObjectSet and virtual SaveChanges

We also delved into WCF Data Services (previously ADO.NET Data Services) with the current and upcoming features.

We saw how easy it is to break "data silos" using RESTful interfaces.

The main upcoming features are:
  • Support for the JSON format
  • BLOB Streams improvements (Media resource and Media link)
  • Query enhancements, like: projection, row count & inline count
  • Server Driven pages (i.e. server throttling)
  • Feed customizations
A good book to enhance the knowledge can be found here.

Last, but not least, we discussed the Velocity project, identifying the architecture decisions that should be made in order to utilize a distributed cache.

III. Design Patterns

We walked through the catalog of design patterns (GRASP and GoF). Using an example given in class, we applied GRASP "decomposition" patterns to identify the main entities, and then used GoF "composition" patterns to let those entities communicate in order to meet the requirements.

Finally, we reviewed S.O.L.I.D principles.

It definitely was fun!!!

Saturday, October 24, 2009

DTOs, Business Entities and Persistency

(Cross post from IRefactor)

When designing an application, one can easily confuse DTOs, Business Entities and Persistency.

Using the following simple examples, I will demonstrate design considerations and thoughts that will dispel some of the mist.

Consider that you want to represent an audio or a movie located on a web page.
When you visualize such an object, the first thing you probably see is the data that characterizes it. No surprise here; without knowing it, you applied the "Information Expert" pattern when you visualized the object's knowledge responsibilities.

So, what are they? Clearly such an object will have:
  • Uri - object's URL
  • Content - object's content
  • DateCreated - object's creation date
  • Etc...
I will name this object a "Resource", since it represents a resource being located on a web page (be it a movie or an audio).

But wait... here you go: meet the DTO. A DTO (Data Transfer Object) is a simple container for a set of aggregated data that needs to be transferred across a process or network boundary. It should contain no business logic and should limit its behavior to activities such as internal consistency checking and basic validation.

The Resource object is rather simple, it's a mere placeholder for the object's attributes and as such represents a DTO.
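As a sketch, the Resource DTO described above might look like this (the property names follow the list above; everything else is illustrative):

```csharp
using System;

// A mere placeholder for the object's attributes: no business logic, just data.
[Serializable]
public class Resource
{
    public Uri Uri { get; set; }            // object's URL
    public byte[] Content { get; set; }     // object's content
    public DateTime DateCreated { get; set; } // object's creation date
}
```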
Now, let's say that a Resource object should be persisted to some store.

Basically, we would like to introduce a "Save" functionality that will persist a Resource object. The question is: whose responsibility should it be? Using the Information Expert pattern again reveals that a functional responsibility (such as a "Save" operation) belongs to the class with the most information required to fulfill it. Hmmm... Isn't that the Resource class? Since the Resource class "knows" exactly the information it wants to persist, it implies that the Resource class should have the "Save" responsibility.

I know... I know... Didn't we say a minute ago that a DTO shouldn't contain any business logic? By adding a Save responsibility to the Resource class we turned it into something else: a Business Entity. A business entity is an object that is responsible for solving domain requirements by collaborating with additional business entities.

Let's see what our design options are for creating a Resource business entity.

Option I: A Resource business entity inherits a ResourceDTO.

Though it's tempting to select the above solution, you shouldn't take that path (unless the requirements are very, very simple). Remember: one of the principles of object-oriented design is to favor object composition over class inheritance. There are many reasons why the principle holds, but I am not going to discuss all of them here.

It does not make sense that Resource "is-a" ResourceDTO; you are not going to transfer the Resource object across boundaries. Moreover, such a design may violate the Single Responsibility and Open/Closed Principles. The "Save" method is going to be bloated with specific logic for connecting to a certain store, which is clearly not the Resource's responsibility. Changing the store, having more than one store, or changing what to store will require modifying Resource, which violates the Open/Closed Principle.

Option II: A Resource business entity passes a Resource DTO to a specialized Resource data access entity.
All the specific data-access logic is captured in a specialized ResourceDal object. Any change of store will change only the specialized object. The ResourceDal object receives a persistent Resource DTO, thus decoupling the Resource entity from the persistence details (which can then vary easily later on). Such a design is superior, as it allows injecting the correct data access object into the Resource entity (which also makes it much more testable). It also allows multiple DTOs, for example: one for persistence and one for visualization.
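A minimal sketch of Option II follows. The IResourceDal interface name and its Save signature are my own illustration of the idea, not code from the post:

```csharp
using System;

// Placeholder persistence DTO (fields elided for brevity).
public class PersistedResourceDTO { }

// Abstraction over the store; concrete implementations (SQL, file, ...) live elsewhere.
public interface IResourceDal
{
    void Save(PersistedResourceDTO dto);
}

public class Resource
{
    private readonly IResourceDal dal;

    // The data access object is injected, keeping the entity store-agnostic and testable.
    public Resource(IResourceDal dal)
    {
        this.dal = dal;
    }

    public void Save()
    {
        // The entity extracts a persistence DTO and hands it to the specialized DAL.
        dal.Save(ExtractPersistentDTO());
    }

    private PersistedResourceDTO ExtractPersistentDTO()
    {
        // Mapping logic (e.g. AutoMapper, as shown later in the post) goes here.
        return new PersistedResourceDTO();
    }
}
```

In a unit test, a fake IResourceDal can be injected in place of the real store, which is exactly the testability benefit described above.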

In order to derive a PersistedResourceDTO from a Resource entity I will use the AutoMapper library.

AutoMapper uses a fluent configuration API to define an object to object mapping strategy, thus allowing easy transformations.
Here is the PersistentResourceDTO declaration:

[Serializable]
public class PersistedResourceDTO
{
    public Uri Uri { get; private set; }
    public byte[] Content { get; private set; }
    public DateTime DateCreated { get; private set; }
}

Using the AutoMapper library is especially easy when the properties and methods of the source type match the properties of the destination type; this is called flattening.
Thus the ExtractPersistentDTO method's implementation is simple and coherent.

private PersistedResourceDTO ExtractPersistentDTO()
{
    // creates a mapping between Resource and the persistent Resource DTO.
    Mapper.CreateMap<Resource, PersistedResourceDTO>();

    // flattens the current Resource object into a PersistedResourceDTO.
    PersistedResourceDTO dto = Mapper.Map<Resource, PersistedResourceDTO>(this);

    return dto;
}

The AutoMapper library also allows projection. Projection transforms a source to a destination beyond flattening the object model. Consider that a Resource also has a collection of keywords. In order to visualize the Resource, we want to use the following VisualResourceDTO, where the title is a combination of all the keywords.

[Serializable]
public class VisualResourceDTO
{
    public Uri Uri { get; private set; }
    public string Title { get; private set; }
}

Here is how to do it using the AutoMapper:


private VisualResourceDTO ExtractVisualDTO()
{
    // creates a mapping between Resource and the visual Resource DTO,
    // combining all the keywords into the Title property.
    Mapper.CreateMap<Resource, VisualResourceDTO>()
        .ForMember(d => d.Title,
                   op => op.MapFrom(resource =>
                       string.Join(" ", resource.Keywords.ToArray())));

    // projects the current Resource object into a VisualResourceDTO.
    VisualResourceDTO dto = Mapper.Map<Resource, VisualResourceDTO>(this);

    return dto;
}

(Remark: In the post above I didn't discuss other approaches, like ORM solutions).

Sunday, October 4, 2009

Notes on C++/CLI

(Cross post from IRefactor)
If you asked for my opinion whether to develop an application using (unmanaged) C++, I would strongly advise you to reconsider.

Unless you are dealing with (near) real-time applications, you are better off in the managed world. Sure, there are times for good old C++, especially when the application's memory footprint is an issue; but needless to say, you stand a better chance of productivity, ease of development and maintainability with managed applications (C#, Java, etc.) than in the unmanaged world.

Sometimes, obviously, the above advice is impractical. When dealing with complex algorithmic problems that already have pretty sound support in various unmanaged C++ toolkits and frameworks, you are forced to choose what exists over reinventing the wheel.

C++/CLI can bridge the gap.
The common practice is to wrap the unmanaged libraries with C++/CLI, thus exposing the unmanaged functionality through a managed C++ layer. In the example below, the Taxes project contains various algorithms for tax calculations written in (unmanaged) C++. The Taxes project is wrapped by the TaxesMixed project, which is a C++/CLI project. Finally, the TaxesApp project, written in C#, uses the TaxesMixed wrapper to utilize the complex unmanaged tax calculations.


Typical Solution

When dealing with unmanaged, mixed (C++/CLI) and managed projects, there are some notes to keep in mind:

Debugging

If you want to debug the unmanaged code (e.g. Taxes above) from the managed code (e.g. TaxesApp above), don't forget to mark the option Enable unmanaged code debugging on the Debug tab of the managed project's settings.


Enable unmanaged debugging tab

Implementing Destructor/Finalizer

C++/CLI classes that use unmanaged C++ classes should implement the Dispose/Finalize pattern (look for the "Deterministic finalization template" title). Apparently, implementing the pattern is much easier using C++/CLI than C#. All you need is to provide a deterministic destructor (~ syntax) and a finalizer (! syntax), and everything else will be generated by C++/CLI.


TaxesMixed::TaxCalculatorWrapper::~TaxCalculatorWrapper()
{
    // The destructor delegates to the finalizer, so the cleanup logic lives in one place.
    this->!TaxCalculatorWrapper();
}

TaxesMixed::TaxCalculatorWrapper::!TaxCalculatorWrapper()
{
    if (nullptr != m_Calculator)
    {
        delete m_Calculator;
        m_Calculator = nullptr; // guard against double deletion
    }
}
C++/CLI will generate all the additional Dispose, Dispose(bool) and Finalize methods in order to complete the pattern.


Here is a quick peek into the Dispose(bool) method, using Reflector.
As you can see, the method calls the deterministic destructor (~) when TaxWrapper.Dispose() is called, and calls the finalizer (!) when the TaxWrapper is garbage collected.



To pin or not to pin?

When a pointer to a managed object (or a part of it) needs to be passed to unmanaged code, there is one important issue to be aware of:
Managed objects (on the CLR Small Object Heap) don't remain at the same location for their lifetime, as they are moved during GC heap compaction cycles.
As a consequence, a native pointer that points to a CLI object becomes garbage once the object has been relocated. In order to ensure correct behavior, C++/CLI introduces the pin_ptr pointer, which pins a CLI object in order to prevent its relocation while a garbage collection cycle occurs.

(If you need pointer semantics in C++/CLI you can use an interior_ptr pointer, which is updated automatically when a garbage collection cycle occurs and remains a valid pointer even after compaction.)

You declare a pin_ptr as follows:

pin_ptr<type> ptr = &initializer;
Example:

public class Person
{
    public string Id { get; protected set; }
    public double[] Taxes { get; protected set; }

    public Person()
    {
        Id = "SSN:12345";
        // taxesDal is assumed to be an available data-access object.
        Taxes = taxesDal.Load(this.Id);
    }
}

void TaxesMixed::TaxCalculatorWrapper::CalculateAnnualTaxes(Person^ person)
{
    // Pinning a whole object.
    pin_ptr<Person^> ptr = &person;
    // Passing the object or one of its value members to native code.
    . . .

    // Pinning an array (by pinning its first element).
    pin_ptr<double> arrPtr = &person->Taxes[0];
    // Assigning to a native pointer.
    double* arr = arrPtr;
    . . .
}
When dealing with pin_ptr you should remember:

  • Pinning a sub-object defined in a managed object has the effect of pinning the entire object.
  • Pinning an object also pins its value fields (that is, fields of primitive or value type). However, fields declared as tracking handles are not pinned.
    (To avoid mistakes, my suggestion is to always pin the required reference-type member, thus pinning the whole object.)
  • A pinning pointer can point to a reference handle, value type or boxed type handle, member of a managed type or to an element of a managed array. It cannot point to a reference type.
In any case, if you are interested in reading more about C++/CLI, you can find very good tutorials on the topic here.

Saturday, September 5, 2009

Refactoring Tools Review - Part I

Cross post from IRefactor

Don Roberts and John Brant stated in the book Refactoring - Improving the Design of Existing Code:
"Refactoring with automated tool support feels different from manual refactoring".
Indeed - It is!
Having an automated tool that helps you to change the code without the fear of breaking it - is invaluable.

That's why I wanted to summarize several available options for .NET developers.

Let's start with the obvious one: Visual Studio Refactoring Tool.
As usual, Microsoft concentrates on the core business, leaving the field to other players.

Visual Studio Refactoring Tool

Visual Studio comes with a very simple Refactoring Tool, available from the Refactor
toolbar or from the Refactor context menu.



As you can see, all the refactoring steps have keyboard shortcuts.
In addition, Visual Studio triggers some of the aforementioned refactoring steps behind the scenes, when a certain change is detected.
Those changes are mostly underlined with a double red line, as here:
Using the combination Shift-Alt-F10 will fire the correct refactoring step menu, allowing smooth and quick refactoring.
(In the case above, the Rename refactoring step.)

The Refactoring Tool provides pretty basic refactoring steps:
  • Rename -
    The tool effectively changes names (of variables, methods, types, etc.) across projects in the same solution.
    The changes can easily be reviewed (checked/unchecked) prior to the modification through the Preview Changes menu.

  • Extract Method -
    Visual Studio provides a basic method extraction mechanism. Extract Method generates a new method with correctly inferred parameters (from the extracted scope) and substitutes the extracted code block with a call to the new method. In addition, Extract Method identifies whether the method uses local class members and, if not, suggests marking the extracted method as static.
    In the refactoring process this often indicates that the method should be moved to a different class, but the Visual Studio Refactoring Tool doesn't provide any suggestion as to where such a move is possible.
    (There are other tools that do provide such suggestions - patience, patience...).
  • Encapsulate Field -
    Creates properties for existing members.
  • Extract Interface -
    Enables extraction of an interface from a type.
  • Promote Local Variable to Parameter -
    Promotes a local variable to a method parameter.
    When a local variable declaration isn't selected correctly, the tool will prompt with an explanatory message (first screenshot).
    Also, the tool will alert when it cannot guarantee that all callers pass a legal value to the newly promoted parameter (second screenshot).


  • Remove Parameters/Reorder Parameters -
    Adjusts a method's parameters.
Clearly, as stated above, this is a very simplistic refactoring tool, especially when compared to Eclipse. As you can see below, Eclipse comes with many more refactoring steps available out of the box.
There are more than 20 refactoring steps in the Refactor menu.

In addition, one can utilize additional Visual Studio "Refactoring" features.

Visual Studio Additional "Refactoring" Features
  • Source Code Length -


    Adding guides to the Visual Studio IDE allows you to visually emphasize the length of a source code line.
    In order to add guides, edit the "HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\X.X\Text Editor" key in the registry (where X.X is the VS version).
    Create a string value called "Guides" with the data "RGB(255,0,0), 80" in order to have a red line at column 80 in the text editor.
  • Code Definition -
    Using Go To Definition each time you want to examine a referenced method or type is very tedious. That's why the Code Definition window is handy! As you move the insertion point in the editor, or change the selection in Class View, the Object Browser or the Call Browser, the content of the Code Definition window is updated to display the definition of the symbol referenced by the selection.

  • Code Metrics -
    Knowing the code's complexity can help in writing cleaner, more refactored code. Visual Studio provides the Code Metrics window, which gives better insight into the developed code. I would suggest paying attention especially to the following metrics:
    1. Cyclomatic Complexity - Measures the structural complexity of the code.
      A program that has complex control flow (and thus a high cyclomatic score) will require more unit tests to achieve good code coverage and will be less maintainable.
    2. Class Coupling - Measures the coupling to unique classes through parameters, local variables, return types, method calls, etc.
      High coupling (and thus a high class coupling score) indicates a design that is difficult to reuse and maintain because of its many interdependencies on other types.
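To make the cyclomatic complexity metric concrete, here is a small hypothetical method of mine: the score starts at 1 for the method entry and each decision point adds one, so the method below scores 4 and needs at least four tests for full branch coverage.

```csharp
public static class Grading
{
    // 1 (method entry) + 3 decision points = cyclomatic complexity of 4.
    public static string Grade(int score)
    {
        if (score >= 90) return "A";
        if (score >= 80) return "B";
        if (score >= 70) return "C";
        return "F";
    }
}
```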



Here is the combined IDE view with all the additional "refactoring" features:



Of course, an experienced Software Engineer can produce clean and refactored code even with the slim Visual Studio Refactoring Tool and the additional "refactoring" features. The experience, though, isn't going to be smooth and easy. The refactoring features are scattered and, overall, not so user friendly.

Summary:
  • The Visual Studio Refactoring Tool is rather simplistic.
  • The most used refactoring steps are Rename and Extract Method. Those steps provide fairly good functionality (though extra behavior is possible when extracting a method).
  • Other significant refactoring steps don't exist; Microsoft leaves the stage to other players.
  • Additional refactoring features, such as Source Code Length guides, Code Definition and Code Metrics, exist and provide complementary ways to understand and refine your code. The features, though, are scattered across different places, which makes it hard to work with all of them simultaneously.
In the next post we will examine additional automatic refactoring tools.

Thursday, August 20, 2009

Israeli Developers Community Conference 2009 - Voting is Open

The Israeli Developers Community is going to meet in order to share and learn from each other.
I have submitted two sessions:
Refactoring & Design - The session will demonstrate how to identify "bad smells" inside the code and how to tackle and refactor those smells using Refactoring steps and Design Patterns. For more information you are welcome to listen to a short podcast I did on that subject.
Crawling & Parsing the Web - The session will outline: parser methodologies (DOM, streams, regexes); open source solutions; crawling policies (selection, revisit, politeness); web traps; and distributed architecture. The session will be held together with representatives of my amazing R&D team.

I will be more than happy, if you vote for my sessions!

When?
Monday, 14th September 2009
08:30 - 17:30
Where?
Microsoft ILDC,
13 Shenkar st., Herzeliya, Israel

Wednesday, July 29, 2009

Refactoring & Design Podcast

Here is a short podcast I participated in; the subject is Refactoring & Design.

Many thanks to Ran & Ori !!!



Sunday, July 26, 2009

One pattern to rule them all

(Cross post from IRefactor)

Well, there is none...
Simply put, there is no silver bullet.

Yet, while designing an application, there are several well known extremes:

"Heavy weighted design" - Such a design will use almost each and every pattern described in the bible of "Design Patterns" - by the Gang of Four.

However, squeezing as many patterns as possible into your design is a bad smell. Here is what Erich Gamma has to say on the matter:
"Trying to use all the patterns is a bad thing, because you will end up with synthetic designs-speculative designs that have "flexibility" that no one needs. These days software is too complex. We can't afford speculating what else it should do. We need to focus on what it actually needs. That's why I like refactoring to patterns. People should learn that when they have a particular kind of problem or code smell, as people call it these days, they can go to their patterns toolbox to find a solution."
"Big ball of mud" or "no design" at all - Though the application utilizes classes (objects) and methods, it does it solely due to the language constraints. After all, the code must be placed somewhere and usually it lies in giant classes and methods. Besides that, there are no abstraction layers or small and specialized objects that have "concise conversations" in order to solve the business requirements.


To use patterns correctly, a Software Engineer must understand the context (i.e. the requirements) and the suggested solutions depicted by the patterns. Usually, in real-life scenarios, it's hard to grasp where exactly the right context is to which a suggested design solution applies. Especially for inexperienced Software Engineers, the design patterns described in the Gang of Four's bible can confuse and lead to:
  • Misidentifying contexts, and thus creating "big ball of mud" or "no design" applications.
  • Over-identifying contexts (even where there is no context at all), and thus creating "heavy-weight design" applications.
That's where the Expert pattern can be handy! The Expert pattern is one of the General Responsibility Assignment Software Patterns (GRASP). In essence, it's a very basic and straightforward pattern that helps to pinpoint the main responsibility of a method or a class. Just ask yourself: "What is the real responsibility of this method/class? Is it really this method's/class's responsibility to perform that operation?" By answering those questions it's possible to divide a system effectively into many small but cohesive objects that really need to collaborate in order to solve your business requirements (which is what Object-Oriented Design is really about).
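A minimal sketch of the Expert principle in action (the Order/OrderLine example is my own hypothetical illustration): the object that holds the information carries the responsibility.

```csharp
using System.Collections.Generic;

public class OrderLine
{
    public decimal UnitPrice { get; set; }
    public int Quantity { get; set; }

    // OrderLine knows its own price and quantity, so computing the
    // line total is its responsibility (Information Expert).
    public decimal Total()
    {
        return UnitPrice * Quantity;
    }
}

public class Order
{
    private readonly List<OrderLine> lines = new List<OrderLine>();

    public void Add(OrderLine line) { lines.Add(line); }

    // Order holds the lines, so summing them is Order's responsibility,
    // rather than some external "calculator" class.
    public decimal Total()
    {
        decimal sum = 0;
        foreach (var line in lines) sum += line.Total();
        return sum;
    }
}
```

Notice how the question "whose responsibility is it?" naturally yields two small, cohesive objects that collaborate, instead of one giant class.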

So it comes that, though there is no "pattern to rule them all", using the Expert pattern/principle effectively will allow you to rule your application.

Sunday, July 12, 2009

Asymmetric Accessor Accessibility and Automatic Properties

(Cross post from IRefactor)

Asymmetric Accessor Accessibility – a feature introduced in .NET 2.0 to allow different accessibility levels for the get and set portions of a property or an indexer. Those get and set portions are called accessors.

In the example below, the get accessor is public, whereas the set accessor is restricted and private.

private string id;
//...
public string Id
{
    get { return id; }
    private set { id = value; }
}

Automatic Properties – a syntactic sugar feature, introduced in .NET 3.5, that allows a more concise property declaration.

public string Id { get; set; }

The above features can be combined to form a very elegant and concise property declaration with asymmetric accessibility levels:

public string Id { get; private set; }

Using “automatic asymmetric accessibility properties” contributes to the clarity, elegance and correctness of the code.

One such example is an immutable class implementation.
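A sketch of such an immutable class (a hypothetical Point of my own): the private setters are used only in the constructor, so an instance cannot change after creation.

```csharp
public class Point
{
    // Public get, private set: callers can read but never mutate.
    public int X { get; private set; }
    public int Y { get; private set; }

    public Point(int x, int y)
    {
        X = x;
        Y = y;
    }

    // "Mutations" return a new instance instead of changing this one.
    public Point Translate(int dx, int dy)
    {
        return new Point(X + dx, Y + dy);
    }
}
```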

Friday, June 12, 2009

A Cult Programmer

Cross post from IRefactor

Last week, while conducting interviews for a Senior Software Engineer position, a candidate asked me a “red alert” question. A few moments after starting the interview and explaining the position, the candidate squeezed the following:

“What is the current .NET framework you are using and are you planning to move to .NET 4.0?”
I bet you wonder… Is it really a “red alert” question?

Allow me to elaborate. The candidate's real motivation was to gauge how technological the company interviewing him is. If the company is stuck on .NET 1.x, or isn't planning to move forward with Microsoft's future plans, it just isn't technological enough.

What alerts me is the idea that the specific framework version used by the software is a measure of the software's quality, and that not using a specific technology necessarily means something bad. Maybe the company utilizes the best practices of Software Development by applying Analysis, Architecture & Design, Automatic Unit Testing, Static Code Analysis, Code Coverage, Integration Tests, Automatic Tests, Automatic Builds, Code Reviews, Peer Programming, etc. Maybe the company stands for writing quality software by applying Object-Oriented Principles like GRASP (loose coupling / high cohesion), Design Patterns, SOLID, etc.
Maybe somebody forgot that good quality software doesn't necessarily mean using the dynamic keyword?

Allow me to emphasize. Technology is important! Choosing the right technology for the specific requirements of your application is important!
However, a technology is not the key factor in the software’s success.
It reminds me of a good article by Joel Spolsky discussing technology leaps. Jumping from technology to technology seems to be just plain "Fire and Motion".

If you are asked whether you are planning to move towards .NET X.X, just ask the candidate to explain why (or how), in his opinion, moving to .NET X.X will contribute to your application. Most of the time, as in my case, the answer will be quite generic: "It's just a better technology". This clearly, as explained above, doesn't stand! Such a candidate is often marked as a cult programmer. A Cult Programmer is a programmer who seems to substitute a specific technology's evolution for sound Software Engineering skills.

Saturday, May 30, 2009

Separate Domain from Presentation – part III

Cross post from IRefactor

This is the third post in the series about the “Separate Domain from Presentation” refactoring.

Previous Posts:
Separate Domain from Presentation – part I
Separate Domain from Presentation – part II

Last time we explained how to refactor towards MVP – Supervising Controller pattern.
We left our project in the following state:
In this post I will complete the required refactoring steps and suggest further steps to deepen the separation of UI and BL concerns even more.

Refactoring Steps:
  • "Extract Interface" – in order to have better encapsulation and separation of concerns, the CoursesPresenter shouldn’t access CoursesView directly. After all, the only functionality of interest to the CoursesPresenter is the CoursesView Courses property. Therefore, we will extract an interface from the CoursesView class as follows: right click on the CoursesView class » Refactor » Extract Interface and select the Courses property as shown in the figure below.
  • Compile the Solution and execute the Unit Tests.
  • In the CoursesPresenter class change all the occurrences of CoursesView to ICoursesView.
  • Compile the Solution and execute the Unit Tests.
  • Last time we indicated that the presenter should handle complicated user events by subscribing to the view. After introducing the ICoursesView interface it’s simple. Add the following code to the interface:
event Action LoadCourses;
event Action SaveCourses;
  • Implement the newly added events in the CoursesView class:
public event Action LoadCourses;
public event Action SaveCourses;
  • In the CoursesPresenter class rename the Load and Save methods to LoadCoursesEventHandler and SaveCoursesEventHandler respectively. Use right click » Refactor » Rename tool to rename it easily.
  • Wire-up the events in the CoursesPresenter constructor as follows:
public CoursesPresenter(ICoursesView view)
{
    this.view = view;
    view.LoadCourses += LoadCoursesEventHandler;
    view.SaveCourses += SaveCoursesEventHandler;
}

  • Compile the Solution and execute the Unit Tests.
  • In the CoursesView class add the notification code:
private void NotifyObservers(Delegate del)
{
    Delegate[] observers = del.GetInvocationList();
    foreach (Delegate observer in observers)
    {
        try
        {
            Action action = observer as Action;
            if (action != null)
            {
                action.DynamicInvoke();
            }
        }
        catch
        {
            // graceful degradation.
        }
    }
}
  • Change the CoursesView.Load and CoursesView.Save methods to call NotifyObservers respectively:
private void FrmMain_Load(object sender, EventArgs e)
{
    //...
    NotifyObservers(LoadCourses);
    //...
}

private void Save()
{
    //...
    NotifyObservers(SaveCourses);
    //...
}
  • Compile the Solution and execute the Unit Tests.
  • Now it is time to remove all the temporary instantiations of the CoursesPresenter class from the Load and Save methods. Remove all the occurrences.
  • In Program.cs, instead of Application.Run(new CoursesView()), write the following:
static void Main()
{
    //...
    CoursesView coursesView = new CoursesView();
    CoursesPresenter coursesPresenter = new CoursesPresenter(coursesView);
    Application.Run(coursesView);
}
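As a side note, the NotifyObservers helper above trades type safety for generality (DynamicInvoke and per-subscriber exception swallowing). A simpler, commonly used idiom for raising an event, sketched here with a stand-in class and an assumed LoadCourses event, is to copy the delegate to a local and null-check it:

```csharp
using System;

// A stand-in for the real CoursesView; only the event plumbing is shown.
public class CoursesViewSketch
{
    public event Action LoadCourses;

    // Copying the delegate to a local avoids a race between the null check
    // and the invocation if the last subscriber detaches in between.
    protected void RaiseLoadCourses()
    {
        Action handler = LoadCourses;
        if (handler != null)
        {
            handler();
        }
    }

    public void SimulateLoad()
    {
        RaiseLoadCourses();
    }
}
```

Unlike NotifyObservers, this invokes the whole invocation list at once, so if one subscriber throws, later subscribers are skipped; the per-subscriber loop above remains the right choice when graceful degradation is required.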


This concludes the “Separate Domain from Presentation” refactoring.
We ended with the following:
For next possible steps, consider the following:
  • Go over the CoursesView.Designer.cs and remove all the TableAdapter instances.
  • Create a DAL and move the Save and Load methods further, from the presenter into the DAL.
  • Create the CoursesView and CoursesPresenter using Abstract Factory or using Dependency Injection.
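The last suggestion can be sketched as plain constructor injection behind a composition root. The types below are stubs standing in for the post’s classes, just to show the shape of the idea:

```csharp
using System;

// Stub types standing in for the post's classes.
public interface ICoursesView { }

public class CoursesView : ICoursesView { }

public class CoursesPresenter
{
    private readonly ICoursesView view;

    public CoursesPresenter(ICoursesView view)
    {
        this.view = view;
    }
}

public static class CompositionRoot
{
    // The only place that knows the concrete view type; swapping the view
    // for a factory- or container-created one touches only this method.
    public static CoursesPresenter Compose()
    {
        ICoursesView view = new CoursesView();
        return new CoursesPresenter(view);
    }
}
```

With an Abstract Factory or a DI container, only Compose changes; the presenter keeps depending on the interface alone.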

Thursday, May 28, 2009

Var{i-able;}

Cross post from IRefactor

Here is a scoop: the good software engineer is lazy!
You don’t believe me? Then ask yourself this: if a good software engineer were not lazy, why would he:
  • Automate processes?
  • Reuse a function instead of duplicating its code?
  • Explicitly name a function for its behavior instead of naming it F1 and providing non-descriptive (and possibly long) documentation?
Yet, here is another scoop: the bad software engineer is lazy too!
While this statement clearly isn’t a shock to you, it immediately pops up the question:
What is the difference?
The difference is that a good software engineer is lazy in a constructive way (which allows building reusable software and automated processes), while a bad software engineer is lazy in a destructive way (which destroys any chance of reusable software and dooms you to long hours of struggling to understand and fix the bad code).

And here is a short example:
The var keyword allows implicitly typed variable declarations.
To tell the truth, it also allows you to be the bad kind of lazy software engineer!
Why bad? Take a look at the code below:

var database = DatabaseFactory.CreateDatabase();

What is the meaning (meaning = type) of the database object in this context? Do I really need to guess that? Does somebody expect me to go to its definition to find out?

//...
foreach (var observingStore in wareHouse.Stores)
{
    //...
}
//...

What is the meaning of the observingStore object in this context? Was it named observingStore due to its implementation of the Observer pattern (implementing IObserver), or is it just the name of an object of the Store, ObserverStore or even AgricultureStore class type?

Remember, you want to be lazy in a constructive way; you want to read those lines of code without wondering. You want to immediately grasp the meaning (types) of the objects you are dealing with, without switching context and jumping to a different location just to refresh your memory.

The var keyword was introduced for one purpose and one purpose only: to allow the usage of anonymous types. Therefore, this is the only place you should use it! Unless you are a bad lazy software engineer (which clearly is not the case :) ), you will follow the rule!
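To illustrate the rule (all names here are made up for the example), var is indispensable for anonymous types, which have no type name you could spell out, and redundant everywhere else:

```csharp
using System;
using System.Linq;

class VarDemo
{
    static void Main()
    {
        int[] grades = { 90, 72, 85 };

        // Legitimate use: the projection below creates an anonymous type,
        // so there is no type name to write on the left-hand side.
        var summary = new { Count = grades.Length, Average = grades.Average() };
        Console.WriteLine(summary.Count);

        // Constructive laziness: the explicit type tells the reader
        // what "average" is without a trip to any definition.
        double average = grades.Average();
        Console.WriteLine(average);
    }
}
```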

Monday, May 25, 2009

Scientist Office Grant

The Scientist Office provides support to Israeli hi-tech companies in the field of innovative research and development (R&D). A month ago we applied for such a support and I was advised (thanks to Avi) to summarize the experience for those who might be interested in the future.

Here is a short list of Don’ts and Dos from the technology perspective.

DON’Ts and DOs:
  • Don’t use external consultants:

    • All the information is freely available here.
    • The application forms are easy to follow and easy to understand.
    • Using an external consultant only adds complexity, sometimes due to his eagerness to “contribute” to the process.

  • Don’t write a lengthy application; summarize your technology concisely, mainly describing the following:

    • What are the key points of the current technology (if any exists).
    • Why the current technology (or the market’s current technology) doesn’t solve the required problem.
    • How your innovative technology development will address the required problem. Here you can broaden your key points and explain in more detail the technology steps you will apply. Any algorithmic or technological innovations must be described at a high to mid level (all the Scientist Office’s inspectors are bound to confidentiality).

  • Do provide a Block Diagram (possibly a UML Component Diagram) that describes the architecture of your future product at a high level.

    • Provide a short description for each component and emphasize the innovation in each component, if any.

  • Do provide an R&D task breakdown as follows:

    • Don’t break down the tasks into Design, QA or DB support. It’s obscure. Incorporate those inside the tasks themselves.
      For example:


      Don’t:
      Task A = 6.5 months (resource time)
        • Design = 0.5M
        • Dev = 4M
        • DB = 1M
        • QA = 1M

      Do:
      Task A = 6.5 months (resource time)

    • If you have a task that spans 2 years (a 2-year application), divide the task into meaningful milestones.
      For example:


      Application:

      Don’t:
        • Year 1: Visual Studio Integration I
        • Year 2: Visual Studio Integration II

      Do:
        • Year 1: Visual Studio Toolbar Integration
        • Year 2: Visual Studio Property Plug-In Integration

  • Do make a realistic estimation of the development tools and licenses:

    • The Chief Scientist supports development only; don’t include any production costs.
    • The Chief Scientist supports one computer per R&D team member.
    • The Chief Scientist encourages the use of local sub-contractors during the development phase.

  • Do be prepared to stand by your application:

    • To explain why your technology is needed.
    • To justify the feasibility of your solution; show you have enough resources (or that you plan to hire them) and demonstrate any technological proof of concept.
    • To justify any development cost or use of a sub-contractor.

I hope this short list helps you, but if you still have questions, you are most welcome to write me and I’ll gladly try to assist.

Monday, May 18, 2009

Separate Domain from Presentation – part II

Cross post from IRefactor

This is the second post in the series about the “Separate Domain from Presentation” refactoring.

Previous Posts:
Separate Domain from Presentation – part I

Last time we discussed ways to disconnect the Presentation (FrmMain) from the Domain Object (CoursesDS) in the IRefactor.CoursesView project. As a consequence, instead of the bloated all-in-one initial project, we ended up with the following:
  • IRefactor.CoursesView – represents the View (CoursesView) without the domain object.
  • IRefactor.Common – represents the domain object (CoursesDS) without the UI elements.
It’s time to carry the UI and BL separation further. For that purpose I will use the MVP pattern. Since there seem to be a lot of misunderstandings regarding the definitions of UI/BL separation patterns (take a look here), I will focus on the following definitions:

In my post, MVP = Model-View-Presenter will basically stand for:
  • Model – Hey, I am the domain model;
    I know how to manipulate model objects in order to perform the required application logic and persistence.
    I don’t know how to visualize any information to the user or how to respond to any action the user may take.
  • View – Hey, I am the view;
    I know how to visually present information to the user (information provided to me by the Presenter).
    I know how to perform simple data binding and possibly simple UI actions that modify the visual layout of the screen.
    I don’t know what to do when application logic or persistence is required.
  • Presenter – Hey, I am the presenter;
    I know how to handle user requests to the View (those more complicated than simple data binding) and how to delegate them to the Model.
    I know how to query the Model in order to delegate information to the View, if any should be displayed to the user.
    I don’t know how to draw widgets graphically (that’s the View’s concern) and I don’t know how to perform any application logic in order to derive that information (that’s the Model’s concern).
Those with sharp eyes will probably spot here the use of a Supervising Controller (which, in the spirit of Refactoring, doesn’t introduce drastic changes to the code right away; later on, one could turn the View into a Passive View while continuing to refactor).
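The three roles can be sketched minimally as follows. These are hypothetical types, not the post’s project, just to make the division of labor concrete:

```csharp
using System;

// Model: application logic and persistence; no UI knowledge.
public class CoursesModel
{
    public string[] LoadCourses()
    {
        return new[] { "OOP Principles", "Refactoring" };
    }
}

// View: only displays what it is handed; no logic, no persistence.
public interface ICoursesViewContract
{
    void Display(string[] courses);
}

// Presenter: queries the Model and feeds the View; draws nothing itself.
public class CoursesPresenterRole
{
    private readonly ICoursesViewContract view;
    private readonly CoursesModel model;

    public CoursesPresenterRole(ICoursesViewContract view, CoursesModel model)
    {
        this.view = view;
        this.model = model;
    }

    public void OnLoad()
    {
        view.Display(model.LoadCourses());
    }
}
```

Because the presenter sees the view only through an interface, it can be unit tested with a fake view and no UI framework at all.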

MVP - Supervising Controller

Refactoring Steps:
  • Rename the IRefactor.CoursesView.FrmMain class to CoursesView. Go to FrmMain, right click on it and use Refactor » Rename command to rename it easily.
  • Create a class CoursesPresenter in IRefactor.CoursesView.
SeparateDomainFromPresentation.CoursesPresenter
  • Add to the CoursesPresenter class a reference to the CoursesView (pay attention to the fact that the view is readonly):
public class CoursesPresenter
{
    private readonly CoursesView view;
}
  • Add to CoursesPresenter a constructor that receives a CoursesView instance:
public CoursesPresenter(CoursesView view)
{
    this.view = view;
}
  • Compile the Solution and execute the Unit Tests.
  • Now we need to delegate the user’s interactions from the view to the presenter. We can do it either by deriving from EventArgs and creating a CoursesEventArgs class, or by letting the CoursesPresenter query the CoursesView directly and grab the required data. Here I’ll grab the CoursesDS domain object directly. Add the following to the CoursesView:
public CoursesDS Courses
{
    get { return coursesDS; }
}
  • Let’s start with the Save event delegation. If you look closely at the coursesBindingNavigatorSaveItem_Click event handler, you will notice that the method has two different responsibilities: it handles the required data binding and then performs a data access operation in order to save the CoursesDS domain object. To separate the concerns, let’s use another refactoring step called “Extract Method”. Select the data access code, right click on it and use the Refactor » Extract Method command to extract the code into a new method called “Save”.
SeparateDomainFromPresentation.SaveItem
// ...
this.Validate();
this.bsCourses.EndEdit();
// Changed from auto generated code.
Save();
// ...

private void Save()
{
    if (coursesDS.HasChanges())
    {
        CoursesDS.CoursesDataTable changes =
            this.coursesDS.Courses.GetChanges() as CoursesDS.CoursesDataTable;
        if (changes != null)
        {
            taCourses.Update(changes);
        }
    }
}
  • Compile the Solution and execute the Unit Tests.
  • After breaking up the coursesBindingNavigatorSaveItem_Click method, we suddenly realize that the Save method doesn’t belong in the CoursesView class, as it performs a data access operation. By all means this operation should live inside the domain model (business logic). In the meanwhile, we will push the method into the presenter.
  • In CoursesPresenter create a new method called Save. The method will retrieve the CoursesDS domain object from the CoursesView and save the object into the DB.
public void Save()
{
    CoursesDS coursesDS = view.Courses;
    //...
}
  • Compile the Solution and execute the Unit Tests.
  • Copy all the code from the CoursesView.Save method into the CoursesPresenter.Save method and adjust the code to its new “place” (pay attention to the CoursesTableAdapter, which needs to be redefined).
public void Save()
{
    CoursesDS coursesDS = view.Courses;
    if (coursesDS.HasChanges())
    {
        CoursesDS.CoursesDataTable changes =
            coursesDS.Courses.GetChanges() as CoursesDS.CoursesDataTable;
        if (changes != null)
        {
            using (CoursesTableAdapter taCourses = new CoursesTableAdapter())
            {
                taCourses.Update(changes);
            }
        }
    }
}
  • Compile the Solution.
  • Now, for the fun part: comment out all the code within the CoursesView.Save method and declare a CoursesPresenter object that calls its Save method.
private void Save()
{
    //if (coursesDS.HasChanges())
    //{
    //    CoursesDS.CoursesDataTable changes =
    //        this.coursesDS.Courses.GetChanges() as CoursesDS.CoursesDataTable;
    //    if (changes != null)
    //    {
    //        taCourses.Update(changes);
    //    }
    //}
    CoursesPresenter presenter = new CoursesPresenter(this);
    presenter.Save();
}
  • Compile the Solution and execute the Unit Tests.
  • Voila! You have successfully moved a data access method from the view to the presenter. With continuous refactoring you can push that method even further, into the data access layer.
A quick summary:

  • We introduced a new presenter class, called CoursesPresenter.
  • We moved the Save method (which performs a data access operation) from the view into the presenter class. (Don’t worry, we will eliminate the Save method from the CoursesView in the next post.)
  • The same should be applied to the Load method (FrmMain_Load). I won’t show it here, just use the same principle.
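Applying the same principle to Load, the presenter’s method might look like the hedged sketch below. It reuses the post’s type names, and the Fill call is assumed from the typed-dataset TableAdapter convention rather than shown in the original:

```csharp
// Sketch only: mirrors CoursesPresenter.Save for the Load direction.
public void Load()
{
    CoursesDS coursesDS = view.Courses;
    using (CoursesTableAdapter taCourses = new CoursesTableAdapter())
    {
        // Fill populates the typed DataTable from the database,
        // the mirror image of Update in the Save method above.
        taCourses.Fill(coursesDS.Courses);
    }
}
```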
Here is the schematic view of the current IRefactor.CoursesView project.

SeparateDomainFromPresentation.MVP
Clearly, it’s not yet the MVP pattern we depicted earlier. Future posts will explain additional steps to refactor towards the MVP pattern, by applying:
  • Events – the presenter handles complicated user events by subscribing to the view.
  • Interfaces – the presenter should manipulate the view only via an interface. Failing to do so will break the view’s encapsulation.
References:
Take a look here for a good summary on Smart Client development.

Wednesday, April 29, 2009

Shooting without Aiming

Cross post from IRefactor

During my lectures on “Refactoring & Design” I am frequently amazed to hear the following ideas:
“Doesn’t Design contradict Agile? Isn’t Agile about gaining speed, while Design is about gaining bureaucracy?” - some ask.
“Why bother with Design? After all, it’s impossible to do any design when using Agile methods, like TDD” - others continue.

I often compare the aforementioned questions to “Shooting without Aiming”. It wouldn’t occur to anyone to shoot at a target without first aiming at it. The rule holds especially when time is short and every shot counts; you will spend an extra minute just to aim well!

Shooting after Aiming is “life saving”!
So, how does it even cross anyone’s mind to code without thinking (thinking = Design)?

Please, don’t take my word for it. See what Robert C. Martin has to say about The Scatology of Agile Architecture. Robert C. Martin knows a thing or two about Agile Development. After all, he was the originator of the meeting that led to The Agile Manifesto.

Friday, April 24, 2009

Dude, I blew up the Demo!

Cross post from IRefactor

I am sure, we are ALL familiar with the situation:

Morning… The sun is shining, the birds are chirping… You are sitting in front of your computer, sipping a delicious cup of coffee. Then, in the corner of your eye, you spot movement; your VP Marketing approaches you with a big smile on his face:
“Jonathan”, he says, “Just got important news, we have a big opportunity! We have been requested to demonstrate our super complex web analysis capabilities to a huge potential client. Could we build a quick demo?”
You sigh and start coding…

You neither sleep nor eat; you copy and paste; you code; you build and execute, and after five stressful (but, yes, enjoyable) days you deliver a top-notch Demo.

The new “toy” becomes the hottest news in the office.
It’s cool, it's fast, it's colorful and it demonstrates an innovative functionality and thinking!

The VP Marketing is in heaven; he presents the demo to the client and gets an enthusiastic response.

“Jonathan”, his eyes are gleaming, “They are excited! They just need a small feature - export to excel, to evaluate it a little bit more.”
You return to the operating table and add some “quick & dirty” code to export the analysis to excel.
“Jonathan”, says your VP Marketing, “Great work! Could you also add a small feature of notification by email?”
A few days later…
“Jonathan, Well done!, Could you also add…”
A year later, you find yourself maintaining the demo and cursing that cheerful morning you had agreed to develop the goddamned application!

If you ask your VP Marketing what happened, he will honestly say: “Dude, I blew up the demo!” Remember the Honey, I blew up the kid!? Your VP Marketing, "accidentally", blew up your five days old “child” into a giant “monster”.

It’s common for small to mid-size companies to turn their “demo” applications into production ones. “Time to market” is crucial; once the demo is introduced successfully, features are added patch over patch, resulting in a House of Cards AntiPattern.

House of Cards AntiPattern – a continuous patch (card) over patch (card), applied in order to correct a bug or add a focused feature without design or refactoring considerations.

Even in a small demo application, there is a place for a careful examination of the developed features. You can make assumptions; you can speed up the UI development, but you need to design the core features as you would for production code:

  • Separate the Domain from the Presentation, using MVP pattern for example.
  • Use Façades to hide the BL.
  • Provide a well organized BL. (Don’t try to address all the future possibilities and requirements; just provide a good object oriented basis.)
  • Don’t duplicate code, don’t provide lengthy and hard to read methods.
  • Provide Unit Tests and test the BL as much as possible.
  • Deal with “considerable” exceptional situations (it’s OK to decide not to handle uncommon demo scenarios).
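The Façade suggestion can be sketched like this (all names are hypothetical): the demo UI talks to a single class, and the subsystems behind it can be redesigned later without touching the UI:

```csharp
using System;

// Hypothetical BL subsystems of the demo.
public class WebDataParser
{
    public int[] Parse(string raw)
    {
        // Pretend parsing: a fixed sample keeps the sketch self-contained.
        return new[] { 1, 2, 3 };
    }
}

public class WebDataAnalyzer
{
    public int Analyze(int[] samples)
    {
        int sum = 0;
        foreach (int s in samples)
        {
            sum += s;
        }
        return sum;
    }
}

// The façade: the single BL entry point the demo UI is allowed to call.
public class AnalysisFacade
{
    private readonly WebDataParser parser = new WebDataParser();
    private readonly WebDataAnalyzer analyzer = new WebDataAnalyzer();

    public int AnalyzeRaw(string raw)
    {
        return analyzer.Analyze(parser.Parse(raw));
    }
}
```

When the “export to excel” and “email notification” requests arrive, they become new façade methods instead of patches scattered through the UI code.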
Notice: it is perfectly fine to make assumptions and to apply some limitations during demo development. However, building the demo correctly will ease the move to production, even if a certain feature is redesigned or redeveloped.
Undoubtedly, it will ease the development of any additional “demo” features for more potential clients.

As for the managers, try to remember: unless you want to make excuses for unmaintainable and hard-to-change code, you need to allow your development teams to work a little bit longer, just to produce a better demo!