Agile Software is software structured so that it takes relatively less time to change.
Hmmm… This means that rather than structuring the software solely to produce the required Features and Functionality, we purposefully create software whose structure makes changes take significantly less time, while of course still providing the required Features and Functionality. Agile Software might take 10% or 40% or ?% less time to change to produce the same new features as Non-Agile Software takes. Exactly how much less time it takes depends on the state of the code base it’s being compared with. For the potential time savings Agile Software offers over the life of a code base, please see my blog article “Software Structure Can Reduce Costs and Time-to-Market”, which is based on empirical data. And yes, you can get double digit improvements in this area, and maybe more.
In order to see where Agile Software fits into the entire picture, here are the general kinds of Business Value created by a Software Product. And by Software Product I mean a software system that is either sold in the market, or used internally by a business or other organization. This includes desktop software, web apps, web sites, and other kinds of software as well.
Kinds of Business Value Created by a Software Product
- A Software Product itself provides Features and Functionality valued by a business, either for sale or internal-use.
- The code base of a Software Product is modifiable over time, i.e. software is soft. This facilitates the production of additional Features and Functionality from an existing code base that are of value to a business in the future. Therefore, a code base with a significantly lower time cost of changing code to add new features and fix bugs has a significantly higher Business Value.
The Agile Process that arose around 2001 primarily addresses the first kind of Business Value. The overriding goal of the original Agile Manifesto, and the bulk of the Agile literature since, is to produce software that satisfies user wants and needs now. This is opposed to the typical Waterfall result, where the software produced satisfied the requirements written 1, 2, 3, or more years earlier, at the start of the development process. This goal is stated over and over in Agile literature, and is considered to be the primary Business Value produced by Agile Processes. However, the term Business Value is not well defined in Agile literature, as Scott Ambler points out in his excellent article “Agile at 10: What we Believe”. Please see the summary section at the end of this article titled “The Elephants in the Room”.
As such, the Agile Movement and Agile Processes have not yet focused upon the second kind of Business Value – A significant relatively lower cost of changing code to add new features and fix bugs. This is the realm of Agile Software, a truly amazing source of superior Business Value.
In order to get a better understanding of exactly what Business Value is created by Agile Software, let’s examine this in more depth while also looking at some basic measures of Business Value.
Details of the Business Value Created by a Software Product and Measures of It
- The Revenue a software product generates over its life creates Business Value. Or, for an internal-use software product, Business Value is created by enabling a business to carry out its operations (often a source of cost savings, although these days it may be an absolute requirement to get something done that could not be done without software). This is essentially created by the Features and Functionality of the software. It creates significant short term, long term, and future Business Value. Higher Revenues or higher Cost Savings create higher Business Value.
- The Time-To-Market (TTM) of a Software Product creates significant future Business Value. A lower TTM creates higher Business Value. Delivering 4 new software releases per year will probably generate a lot more sales than delivering a single new release each year.
- The Total-Cost-of-Ownership (TCO) of a Software Product creates short and long term Business Value. A lower TCO creates higher Business Value. It’s kind of like not having to pay for servicing your car so often.
- The Return-on-Investment (ROI) of a Software Product creates short and long term Business Value. A higher ROI creates higher Business Value. It’s kind of like getting a higher rate of return from your financial investments.
- Business Agility and Competitive Agility are possibly the most important Business Value created by a Software Product — They are critical to a business’s survival and success in a highly competitive environment. And they are heavily dependent on Time-to-Market to deliver a winning set of Features and Functionality at the right time. Unfortunately they are not as easily measurable as the above items. Being more competitive creates a higher Business Value.
Please notice that Agile Software creates Business Value in each of the above 5 areas! TTM decreases, which in turn has the potential to increase Revenue, since there can be more new versions released each year, AND at the same time to increase Business and Competitive Agility. Since it takes less time to produce a release, the cost of producing a release will likely decline, thus decreasing TCO. ROI increases with increased Revenue or decreased TCO, or both. And, besides all this goodness that one gets with Agile Software + an Agile Process (keeping the feature set focused on the current wants and needs of customers), the business has an excellent chance to become a true niche leader. And niche leadership is worth a whole lot, in part by further increasing Revenue and ROI, and also by not getting driven out of business by the competition. Agile Software sets off a virtuous set of interactions, indeed!
I’ll not bore you with tallying up the detailed Business Value produced by a sole focus on Agile Processes, disregarding the role that software structure plays. Clearly agility in the software development process is a necessary, but not sufficient, condition for sustained software business success. I encourage you to consider how you and your organization can begin to reap the notably superior value created by Agile Software.
Please stay tuned for more articles on this topic.
dotnetsilverlightprism blog by George Stevens is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. Based on a work at dotnetsilverlightprism.wordpress.com.
This month, January 2014, is the 2nd anniversary of my software development blog. Over the past 2 years I’ve posted a lot of articles, and learned a lot about technology, writing and blogging. My goal is to post a “hopefully helpful” article once a month, which I’ve mostly met.
The reasons I blog:
* I read lots of blogs, especially when learning new technologies or solving problems. It is amazing how much “googling” various topics has changed the way we do software engineering. So in order to “give back” a little to the software development webosphere I put in my 2 cents worth each month.
* I like writing, and blogging gives me a chance to do it, and to refine my writing skills.
* And last, but not least, I blog for professional identity — Here is what George the Senior Software Engineer cares enough to write about, spending a non-trivial amount of time on it each month. It usually takes me 2 to 4 hours to write an article, edit it, and post it on WordPress, not including the time it takes to develop the code that the article is about. However, a few of my blogs have taken much, much longer.
The 2 most viewed articles in my blog are as follows:
Over the past 2 years the most popular article, by far, has been the one on “A WCF Proxy from Scratch:….” posted in June 2012. There is a great thirst for knowledge on this topic. It is usually near the top of the list of views each week, if not at the top.
The next most popular article was posted in January 2013, about INotifyDataErrorInfo, the user input data validation interface for Silverlight and WPF. It is often the top article each week. This interface was introduced into WPF in .NET 4.5, after having been in Silverlight for a year or more.
I mainly blog about things I think will be helpful and immediately understandable to a lot of developers, and sometimes about whatever I’m involved in as the monthly blog deadline approaches.
I hope you’ve benefited from this blog as much as I have. Why not start your own software blog and share your knowledge with us?
Yes, that is exactly what I mean — Where, not what. This article seeks to get you thinking about your thinking. The goal is to give you a simple conceptual tool to aid you in focusing your thinking so you and your team can be more efficient and productive in doing software engineering.
Remember a time when you were so wrapped up in some small aspect of a software design or implementation that you completely missed something really big and important? Once realized, the typical response is “Doh! How could I have missed something so obvious?” The old adage “I couldn’t see the forest, for the trees” applies here. Or, I couldn’t see the big picture since I was completely absorbed in some small part of it. Given the complexity of software, this is not surprising. But it does cost time, especially if you did a bunch of design or implementation without considering the missing information, and then have to do it over again to account for what you missed. And when a team collectively does this, much, much more time is consumed.
So, here is the “thinking about thinking” technique that can prevent you from “not seeing the forest, for the trees”. There are three “Levels of Thinking” that one can productively use in all phases of software development.
- Concept Level
- Interface Level
- Implementation Level
And these 3 levels form a hierarchy, as shown above. Very briefly, Concept Level thinking results in black boxes, their definitions, and their interrelations. Interface Level thinking results in complete, terse descriptions of the capabilities a black box offers the outside world. And Implementation Level thinking focuses on the grubby details of the code that implements the capabilities of a black box and its interfaces. Keep it simple.
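To make the three levels concrete, here is a small illustrative C# sketch (the names are hypothetical, invented just for this example). The Concept Level is the idea of a “temperature sensor” black box, the Interface Level is the C# interface describing its capabilities, and the Implementation Level is the class filling in the grubby details.

```csharp
// Concept Level: a "temperature sensor" black box that reports
// the current temperature of something, in degrees Celsius.

// Interface Level: a complete, terse description of what the
// black box offers the outside world. Nothing about how.
public interface ITemperatureSensor
{
    decimal ReadCelsius();
}

// Implementation Level: the grubby details behind the interface.
public class FakeEngineTemperatureSensor : ITemperatureSensor
{
    // A real implementation would talk to hardware; this fake
    // just returns a fixed value, which is handy for testing.
    public decimal ReadCelsius()
    {
        return 88.5m;
    }
}
```

Notice that a Requirements meeting can stay at the Concept Level (what the sensor is for), a design session at the Interface Level, and only the coding task needs the Implementation Level.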
In all the tasks you do in software engineering, you get to choose which level of thinking you are going to use in each particular task. I hope you are starting to think about your own thinking, here. Note that you may rely on habit or unconsciously seek to work within your “comfort zone”, and thus not consciously choose the level of thinking that you use. And for most software developers the Implementation Level of thinking is the most habitual and/or comfortable.
So what happens when we are doing Implementation Level Thinking without having done sufficient thinking about the concept we are trying to implement? Or without sufficient thinking about the interface(s) our implementation must satisfy? What happens is that it takes more time — Maybe a little — Maybe a lot. And when other people are involved even more time is spent, as a multiplier effect kicks in. Some of this is avoidable.
So asking yourself “Where are you thinking?” from time to time can have a good payoff. You have to be honest. Are you doing a lot of Implementation Level thinking in a Requirements meeting? Or (even more of a time sink) has the entire conversation of a Requirements meeting left the Concept level where it belongs and landed in the Implementation Level? Having Levels of Thinking as part of the “Team Vocabulary” can save a lot of time and keep meetings (requirements, design, etc.) on track. “Wait a minute! Where are we thinking?” can turn this situation around and refocus a Requirements meeting on Requirements and Constraints, rather than upon the Solutions and Implementation Details that Implementation Level thinking focuses upon.
When you are implementing code and things get difficult, do you need to bubble up and do some more work at the Interface Level or Concept Level before proceeding with the code? Learn to often ask “Where am I thinking?” and “Where do I need to be thinking to get a solid solution?”. As an individual, that can keep you on track. One thing that stands out to me is how much easier and faster everything gets when I clearly understand the concept(s) I’m dealing with.
Also learn how to navigate between the 3 Levels of Thinking. For example, when doing the detailed design of a WCF Data Contract, learn to make a mental bookmark and bubble up to the Concept Level and take a look at things from that higher level perspective from time to time. Bubbling up to higher Levels of Thinking is generalizing or holistic thinking. One of its great advantages is that the high level perspective allows you to better see the synergistic opportunities in a system that make the whole greater than the sum of its parts. Synergy can lead to significant increases in productivity. On the other hand, reductionist thinking drills downward, ultimately to the Implementation Level.
Once you learn to use this technique (and it may take some practice), you will find yourself able to look at your work from different perspectives at will, effortlessly navigating between the various Levels of Thinking. You will save significant time by reducing the amount of time you spend doing rework. And occasionally you will see synergistic situations that save lots, and lots of time and make things much easier.
Now you have a useful tool to aid you in thinking about your thinking in day to day software development. I’ve found it invaluable since I read about it in 2007 in a book or article by Martin Fowler. When I first read it, it really rang true, solidifying an intuition I’ve had for years. Thanks so much, Martin! I would include a proper citation but I’ve lost the source. Thankfully I’ve not lost the valuable idea.
This article explores 3 possible ways a WCF Real-Time Notification Service can utilize SignalR, while applying some of the SOLID principles. My focus is entirely on pushing data to clients, rather than the chat type applications popular with SignalR demos. The WCF Real-Time Notification Service has the Single Responsibility of pushing data to clients.
Wouldn’t it be useful to have a WCF Real-Time (RT) Notification Service? Then any other service in an SOA system can just call the WCF RT Notification Service via its Service Contract(s) and say “Push this data to the UI Clients listening for it”. So easy! With a WCF RT Notification Service using SignalR one can push data to all UI Clients – Web Clients written in HTML/JavaScript, WPF Clients, WP8 Clients, etc.
With an RT Notification Service available, Business Intelligence services can use it to push data to UI Clients exactly when server data changes, making a chart or graph change in client UIs. Long running business processes can use it to notify UI Clients monitoring the process when process steps are complete or when human attention is needed. I’m sure you’re aware of other UI Client use cases for a WCF RT Notification Service as well.
The 3 Ways a WCF RT Notification Service can use SignalR
- The WCF service uses the SignalR Persistent Connections feature to push data to clients.
- The WCF service uses one or more SignalR Hubs through a “direct reference” to the Hub on the server, calling the Hub’s “Client Methods” that invoke a method on the client as the means to push data. This is the simplest implementation of the 3 Ways.
- The WCF service uses one or more Hubs as a client of SignalR, connecting to the Hub(s) via SignalR’s HTTP endpoint “connection”. Then, through the HTTP connection as a client, the WCF service calls the Hub’s “Server Methods” as the means to push data. My thanks to Christian Weyer for mentioning this idea in his useful PluralSight course “Introducing ASP.NET SignalR – Push Services with Hubs”.
What? “Client Methods”, “Server Methods”? The terminology can be an obstacle. If you start getting lost, please do a quick scan of “Introduction to SignalR”: http://www.asp.net/signalr/overview/signalr-20/getting-started-with-signalr-20/introduction-to-signalr. Look at the “What Is SignalR” section in the first 2 pages, and the “Connections and Hubs” section near the end of the document.
Which of these 3 Ways is best? That depends on your requirements, the skill level of the developers working on the project now and in the future, and the relative importance in your organization of the tradeoff between reducing Total Cost of Ownership versus reducing Time to Market (often at odds with each other, but not always). The goal of this article is to provide you with, and point you toward, much of the information you need to make that decision. I have programmed the techniques described herein for the 2nd and 3rd Ways to ensure they work. However, I have not yet worked with Persistent Connections.
The 1st Way for WCF Services to Utilize SignalR — Persistent Connections
SignalR provides 2 APIs – the Persistent Connections API and the Hub API. The Persistent Connections API is a communication API that provides access to the low level SignalR communication protocol. This is a connection based protocol, using messages and dispatching. It will be familiar to developers knowledgeable in WCF, which also uses messaging and dispatching.
As is the case with any low level API, using Persistent Connections typically requires writing a lot more code than using SignalR’s higher level Hub API. The Hub API is a remote invocation model which uses (and also hides the details of) the low level Persistent Connections. All of the protocol negotiation required for the HTTP connection is automatically taken care of by Hubs, as is all of the packaging of transferred data, unpacking it on the client, and converting it to client types. With Persistent Connections the developer must write code to do much of this work themselves.
At the end of the document “Introduction to SignalR” Microsoft recommends that most apps use Hubs. It says Persistent Connections should be used in the following cases:
- It is required that the message format is specified by the developer, rather than by SignalR.
- Developers are more comfortable working with a messaging/dispatching model, than the remote invocation model provided by Hubs.
- An existing app that is implemented in a messaging model is being ported to SignalR.
However, keep in mind that messaging/dispatching is a very powerful means of communication, as WCF demonstrates.
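For orientation only, here is roughly what the Persistent Connections API looks like, based on the SignalR 2.x documentation. As noted above I have not yet worked with Persistent Connections, so treat this as an unverified sketch; the class name is hypothetical.

```csharp
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

// A minimal Persistent Connection that echoes raw message strings
// to all connected clients. Note that with this API the developer
// owns the message format, not SignalR.
public class RawNotificationConnection : PersistentConnection
{
    protected override Task OnReceived(IRequest request,
        string connectionId, string data)
    {
        // Broadcast the raw string to every connected client.
        return Connection.Broadcast(data);
    }
}
```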
This is a brief glimpse of how a WCF RT Notification Service can use SignalR’s Persistent Connections. In the rest of this article I will focus on Hubs since they are the recommended approach and provide a higher level programming model.
Hubs and Terminology
First, please note that I use camel case names for methods and variables that are used on both the client and server, since that is the convention used in the SignalR documentation.
Hubs offer a rather simple high level interface. And I must say from using it myself that it is easy to use, resulting in getting a Hub-based system working quickly. Hubs are basically message brokers. They can be a little confusing at first, so I’ll offer some assistance by using the simple mental model presented at the very top of the first page of the document at http://www.asp.net/signalr/overview/signalr-20/hubs-api/hubs-api-guide-net-client as follows:
“The SignalR Hubs API enables you to make remote procedure calls (RPCs) from a server to connected clients and from clients to the server. In server code, you define methods that can be called directly by clients, and you call methods that run on the client. In client code, you define methods that can be called from the server, and you call methods that run on the server. SignalR takes care of all of the client-to-server plumbing for you.”
Therefore, Hubs have 2 kinds of methods associated with them: Client Methods and Server Methods.
- Client Methods are callable by the server (from a Hub or other server side code that has a “direct reference” to a Hub). They invoke methods on the client. Think of Client Methods as ServerCallsClient methods, “scc” as a shorthand notation. Client Methods transfer data from server to client via the method’s parameters.
- Server Methods are callable by clients connected to a Hub via its HTTP endpoint (not a WCF endpoint). They invoke methods on the server, i.e. the Hub. Think of Server Methods as ClientCallsServer methods, or “ccs” for short. Take note that server code outside of a Hub cannot call a Server Method defined on a Hub. There is no way to do this.
Any client that is connected to a Hub via a SignalR HTTP connection can invoke a Server Method (ccs) defined on the Hub. For example I could have a client invoke the IsHiEngineTemperatureAlarm() Server Method on a Hub. That method will determine if there is a Hi Engine Temperature Alarm condition present (most likely by making a call outside the Hub object to get the info), and return the boolean result to the client. This is a “normal” client-calls-server remote procedure call, similar to those encountered in WCF.
Later I will show a code example of the above IsHiEngineTemperatureAlarm() Server Method just for purposes of illustration. However, in keeping with the Single Responsibility Principle (SRP) the RT Notification Service should only support one way notifications from the RT Notification Service to its clients. Therefore, the Hubs used by this service should not have Server Methods unless they are used only for notification. For example, we do not want to include a chat application in the RT Notification Service.
In this vein, I am sure you can see the potential for a web of interrelated Client and Server Methods becoming unmanageable if not governed by SOLID principles. Hubs are excellent mechanisms for separating concerns. The Interface Segregation Principle (ISP) is also important here: Favor client-specific fine grained interfaces. Use the SRP plus ISP in defining the Client and Server Methods associated with each Hub. Keep them tightly focused on doing one thing, even if it means that you have a few more Hubs. After all, a Hub is just a class available for encapsulating variation.
Anything on the server that has a “direct reference” to a Hub can invoke a Client Method (scc) method associated with that Hub. This causes the client-side SignalR runtime to execute the specified method on the client, including supplying the client-side method with the data in the Client Method’s arguments. For example the (scc) Client Method UpdateEngineTemperature(404.4) can be invoked via server code having a “direct reference” to the Hub associated with that Client Method. The server-side invocation will cause the client-side SignalR runtime to invoke all registered client-side callbacks that implement that Client Method, and also provide each callback with the incoming argument value of 404.4. The callback code can then take that 404.4 value and deposit it in some UI element that results in the temperature display changing. Clients must register each callback that implements a Client Method to be informed by the client-side SignalR runtime that the Client Method has been invoked from the Hub (server). Here your main focus should be on the concept of Client Methods, rather than the exact details of how they work. Please see the SignalR documentation for how client-side programming works. The key ideas to take away are — Via a “direct reference” to a Hub, server side things can call Client Methods (scc). And Client Methods push data from server to client via the Client Method’s arguments.
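For concreteness, here is roughly how a .NET client registers a callback for the sccUpdateEngineTemperature Client Method, using the SignalR .NET client library. This is a sketch based on the SignalR client documentation; the URL is a placeholder.

```csharp
using System;
using Microsoft.AspNet.SignalR.Client;

class TemperatureListener
{
    static void Main()
    {
        // Connect to the SignalR HTTP endpoint (URL is hypothetical).
        var connection = new HubConnection("http://localhost:8080/");
        var proxy = connection.CreateHubProxy("InstrumentsHub");

        // Register a callback that implements the Client Method.
        // The string must exactly match the name the server invokes.
        proxy.On<decimal>("sccUpdateEngineTemperature",
            temp => Console.WriteLine("Engine temp: {0}", temp));

        connection.Start().Wait();
        Console.ReadLine();
    }
}
```

The important part is the string-named registration via On<T>(): this is where the Client Method Name String appears on the client side.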
The 2nd Way for WCF Services to Utilize SignalR – WCF has a “Direct Reference” to SignalR Hubs
The “direct references” to Hubs that I’ve been referring to are actually references to the InstanceContext of a Hub, of the type IHubContext. They are not normal C# references to Hub classes, like the InstrumentsHub class below.
public class InstrumentsHub : Hub
{
    // Server Method (ccs). Shown only for illustration,
    // not to be implemented.
    public bool ccsIsHiEngineTemperatureAlarm()
    {
        return false;
    }
}
SignalR will not let you get C# references to Hub objects at runtime. The SignalR runtime itself manages its Hub objects in such a way that a specific Hub may not be instantiated all the time. Instead, SignalR provides the Hub InstanceContext for use by server code as a “direct reference” to a Hub. Only through the InstanceContext can server code execute Client Methods associated with the Hub.
The following lines of C# code demonstrate how to get the InstanceContext of a Hub. Assume that the WCF RT Notification Service contains the below code that gets executed periodically to update engine temperature displays on UI Clients. The below C# code first gets the InstanceContext of a Hub. Then it executes the Client Method called sccUpdateEngineTemperature(decimal temp). Remember, the below code is within the WCF Service.
IHubContext m_InstrumentsHub =
    GlobalHost.ConnectionManager.GetHubContext<InstrumentsHub>();

// Server calls Client Method.
m_InstrumentsHub.Clients.All.sccUpdateEngineTemperature(404.4m);
Clients.All is a dynamic object whose members do not get bound until run time. Therefore the dynamic Client Method sccUpdateEngineTemperature() cannot appear in the InstrumentsHub source code and can only be accessed via a Hub InstanceContext. As shown by the InstrumentsHub code below, Client Methods are not normal C# members of the Hub class they are associated with. The InstrumentsHub has only one normal C# method on it, a Server Method called ccsIsHiEngineTemperatureAlarm(). The dynamic sccUpdateEngineTemperature() Client Method does not appear on the Hub’s class definition because it has been defined by the client, not the server. Recall from the description of Hubs and Terminology: “In client code, you define methods that can be called from the server…”
public class InstrumentsHub : Hub
{
    // Server Method (ccs). Shown only as an example,
    // not to be implemented.
    public bool ccsIsHiEngineTemperatureAlarm()
    {
        return false;
    }

    // Note that no dynamic methods (Client Methods)
    // appear on the Hub, namely sccUpdateEngineTemperature().
}
Later I will show how the code of a WCF service can access dynamic Client Methods through a C# interface.
Also note that the Hub InstanceContext exposes only the dynamic objects used to call the Client Methods, and does not expose any Server Methods. For example m_InstrumentsHub only exposes the “Clients” and “Groups” dynamic objects, which in turn expose other dynamic objects like “All”. Thus, there is no way for server code outside a Hub class to call Server Methods. Only a client can call Server Methods via the HTTP connection to the associated SignalR Hub. For example, server code cannot call ccsIsHiEngineTemperatureAlarm() in the above class.
The fact that the members of dynamic objects are actually named by strings has far reaching effects. Consider the following code where server code calls a Client Method:
m_InstrumentsHub.Clients.All.sccUpdateEngineTemperature(404.4m);
The compiler will turn the Client Method name sccUpdateEngineTemperature into a string when it generates code. Instead of the above code, you can also use something like the following to invoke a Client Method from server code:
IClientProxy proxy = m_InstrumentsHub.Clients.All;
proxy.Invoke("sccUpdateEngineTemperature", 404.4m);
Note the fragility associated with Client Method Name Strings. These strings are shared by the client and server code, and can easily get out of sync, i.e. changed on one place but not identically changed in the other place, thus breaking the code and causing errors. I’ll cover this in more depth shortly.
Finally, to use GlobalHost.ConnectionManager.GetHubContext<T>() to get a Hub InstanceContext, the server code must be in the same AppDomain as the SignalR code (containing the Hubs, the HTTP Connection, and server side SignalR Runtime). This means that the WCF RT Notification Service and the SignalR instance and its Hubs must share the same App Domain, typically by sharing the same Host process — the WCF Service Host.
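For example, the WCF ServiceHost process could self-host SignalR via OWIN along these lines. This is a sketch; the URL and class name are placeholders, and it assumes the Microsoft.AspNet.SignalR.SelfHost and Microsoft.Owin.Hosting packages.

```csharp
using Microsoft.AspNet.SignalR;
using Microsoft.Owin.Hosting;
using Owin;

public class SignalRStartup
{
    public void Configuration(IAppBuilder app)
    {
        // Map SignalR hubs to the default "/signalr" path
        // in this process.
        app.MapSignalR();
    }
}

// Somewhere in the WCF host's startup code, before opening the
// ServiceHost, start SignalR in the same process (and thus the
// same AppDomain as the WCF service):
//
//   WebApp.Start<SignalRStartup>("http://localhost:8080/");
```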
To summarize so far: the 1st Way for WCF services to utilize SignalR is to use Persistent Connections. The 2nd Way is for the WCF service to share its ServiceHost process with SignalR, so the service can obtain references to the InstanceContexts of Hubs and invoke their Client Methods.
Here are some important things to note about a WCF service using SignalR via references to Hub InstanceContexts:
- Due to the Client Methods being implemented on dynamic objects and their late binding at runtime, the SignalR Hub does not provide a C# Interface or service contract for Client Methods that a WCF service can program against.
- The Client Method Name Strings are used on both the client (to register subscriptions for callbacks in JavaScript and .NET clients) and the server (as the names of the dynamic Client Methods). Since this set of Client Method Name Strings is shared between the client and server they represent a significant source of variability (aka volatility) and fragility that can significantly increase costs, as follows:
Assume a developer has a Client Method called sccUpdateFoo(someValue) and has implemented one or more callbacks on a client using the string “sccUpdateFoo” to register the JavaScript or C# event handler callback(s) with the client SignalR runtime and/or proxy. Then assume the developer changes the name of the Client Method on the server to UpdateFoo(someValue) since they are tired of dealing with my Hungarian Notation. And, they neglect to update the related strings on the client from their original “sccUpdateFoo”. This will break the code and there will be absolutely no indication of such other than the fact that UpdateFoo() will never be executed on the client. It will silently fail. I present more on this topic at the end of this article.
It is easy to fix item 1 above by providing a C# “Client Methods interface” for WCF to program against. In addition to supporting interface based programming that simplifies the testability of the WCF service, this interface encapsulates the variability of Client Method names. This minimizes the footprint of where the Client Method Name Strings appear in the server code, reducing the fragility of the server code. The “Client Methods interface” is implemented by a ClientMethodsProxy class whose members are the only things on the server that call the actual Client Methods, either via Invoke() or via Clients.All.someMethodName() or one of its variants. Thus, the ClientMethodsProxy is the only place where the Client Method Name Strings appear in the server side code. Here’s the code:
public interface IInstrumentsHubClientMethods
{
    void SccUpdateEngineTemperature(decimal measData);
}

public class InstrumentsHubClientMethodsProxy : IInstrumentsHubClientMethods
{
    IHubContext m_InstrumentsHubContext;

    public InstrumentsHubClientMethodsProxy(IHubContext hubContext)
    {
        m_InstrumentsHubContext = hubContext;
    }

    #region IInstrumentsHubClientMethods Members

    public void SccUpdateEngineTemperature(decimal measData)
    {
        m_InstrumentsHubContext.Clients.All.sccUpdateEngineTemperature(measData);
    }

    #endregion
}
The WCF service calls the Client Method through the proxy as follows:
IHubContext m_InstrumentsHub =
    GlobalHost.ConnectionManager.GetHubContext<InstrumentsHub>();

IInstrumentsHubClientMethods m_InstrumentsHubClientMethods =
    new InstrumentsHubClientMethodsProxy(m_InstrumentsHub);

m_InstrumentsHubClientMethods.SccUpdateEngineTemperature(measurement.Data);
With the above code, all of the Client Method Name Strings on the server are now encapsulated within the InstrumentsHubClientMethodsProxy class. No longer are they sprinkled about the code of the WCF service. And, the WCF service can program its calls to the Client Methods against an interface, as opposed to against a concrete implementation. This is an application of the Dependency Inversion Principle (DIP): Depend on abstractions rather than concrete implementations. Above, the WCF service code depends upon the IInstrumentsHubClientMethods interface (an abstraction) rather than directly on strings. Among other things, using the DIP greatly enhances the testability of the WCF service code apart from the Hub code. Note that the 3rd Way presented below will also use the above interface-based programming technique, albeit with a different implementation of the IInstrumentsHubClientMethods interface.
How to deal with the Client Method Names on the client? I’ll take that up at the end of this article.
The 3rd Way for WCF Services to Utilize SignalR – The WCF Service is a Client of SignalR Hubs via their HTTP Endpoint
In the list of the 3 Ways a WCF Service can use SignalR at the beginning of this article I described the 3rd Way as follows:
The WCF service uses one or more Hubs as a client of SignalR, connecting to the Hub(s) via SignalR’s HTTP endpoint “connection”. Then, through the HTTP connection as a client, the WCF service calls the Hub’s “Server Methods” as the means to push data.
While the 3rd Way is modestly more complex than the 2nd Way, it does provide key differences from the 2nd Way which may be beneficial in some situations. Most of the differences revolve around hosting, and while that is a large topic I do not want to dive into, the following items are worth considering.
- The 3rd Way probably hosts the WCF RT Notification Service separately from SignalR. It is conceivable that both could be hosted in the same process, with the WCF service connecting to SignalR as a client rather than using its Hubs’ InstanceContexts. But this is unlikely. What would be the gain over the 2nd Way?
- When the WCF service is hosted separately from SignalR you gain scalability, flexibility, and process isolation. Process isolation might pay off if the SignalR host process crashed, leaving the WCF service running and able to adapt to the crash, perhaps by saving the incoming push requests to disk, etc. There may be other benefits as well along this line of thinking. Flexibility increases since you can now run the different hosts in the cloud, in IIS, in a Windows Service, or on a different server or virtual machine. And scalability increases for the same reason.
- Another reason to host SignalR separately from the WCF RT Notification Service is to be able to use the SignalR host for multiple purposes – Use some Hubs for the WCF RT Notification Service, and other Hubs for other things like a chat facility, a Debug and Test Hub, etc. In other words, consolidating all the SignalR hubs into a single host.
- Finally, the 3rd Way’s use of SignalR provides an HTTP endpoint (although not a WCF endpoint) for the WCF service to connect to. This is more loosely coupled than interacting via a “direct reference” to a Hub InstanceContext.
As you can see, breaking apart the hosting of the WCF Service from SignalR creates new opportunities and flexibility. And this is achieved at a modest cost, as shown below.
Changes Required for the 3rd Way
Note that there are 2 kinds of SignalR clients in the 3rd Way:
- The same UI Clients as in the 2nd Way, which still connect to the InstrumentsHub to have their Client Methods invoked via the InstanceContext of the InstrumentsHub. This has not changed from the 2nd Way.
- New in the 3rd Way is the WCF RT Notification Service, connected as a client to a new Hub so it can invoke the new Hub’s Server Methods, which in turn invoke the Client Methods of the InstrumentsHub via the InstanceContext of the InstrumentsHub. Here the Server Methods of one Hub, the new InstrumentsClientMethodsAsServerMethodsHub, invoke the Client Methods associated with a different Hub, the InstrumentsHub.
Here is the code for the new Hub that the WCF service now connects to as a client:
public class InstrumentsClientMethodsAsServerMethodsHub : Hub, IInstrumentsHubClientMethods
{
    IHubContext m_InstrumentsHub =
        GlobalHost.ConnectionManager.GetHubContext<InstrumentsHub>();

    public void SccUpdateEngineTemperature(decimal measData)
    {
        m_InstrumentsHub.Clients.All.sccUpdateEngineTemperature(measData);
    }
}
Above, note that the new InstrumentsClientMethodsAsServerMethodsHub implements the IInstrumentsHubClientMethods interface that was used in the 2nd Way, for the same reasons that this interface was used there. And, the Server Method on that Hub calls a Client Method associated with another Hub to do the data push.
As was done in the 2nd Way, the WCF RT Notification Service project contains a proxy that implements the IInstrumentsHubClientMethods interface. Again this interface acts to 1) provide interface-based programming per the Dependency Inversion Principle, and 2) consolidate all the Method Name Strings into one place, as done in the 2nd Way. In this case, however, the strings are the names of the Server Methods on the new InstrumentsClientMethodsAsServerMethodsHub. This is due to the requirement of the SignalR .NET client runtime that Invoke() be used to run the Server Methods, as shown below. Here is the code for the new proxy:
public class InstrumentsHubClientMethodsProxyWay3 : IInstrumentsHubClientMethods
{
    IHubProxy m_InstrumentsHubClientMethodsAsServerMethodsProxy;

    public InstrumentsHubClientMethodsProxyWay3(IHubProxy hubProxy)
    {
        m_InstrumentsHubClientMethodsAsServerMethodsProxy = hubProxy;
    }

    #region IInstrumentsHubClientMethods Members
    public void SccUpdateEngineTemperature(decimal measData)
    {
        m_InstrumentsHubClientMethodsAsServerMethodsProxy
            .Invoke("SccUpdateEngineTemperature", measData)
            .Wait();
    }
    #endregion
}
The code in the WCF RT Notification Service that makes the connection to the new Hub via its HTTP endpoint and invokes the above proxy is as follows:
public class RtNotificationServiceWay3 : INrtInstrumentation
{
    IHubProxy m_InstrumentsHubClientMethodsAsServerMethodsProxy;
    IInstrumentsHubClientMethods m_HubClientMethods;

    public RtNotificationServiceWay3()
    {
        var signalRHubConnection = new HubConnection("http://localhost:1234/");
        m_InstrumentsHubClientMethodsAsServerMethodsProxy =
            signalRHubConnection.CreateHubProxy("InstrumentsClientMethodsAsServerMethodsHub");
        m_HubClientMethods =
            new InstrumentsHubClientMethodsProxyWay3(
                m_InstrumentsHubClientMethodsAsServerMethodsProxy);

        // Start the connection.
        signalRHubConnection.Start().Wait();
    }

    // Elsewhere in the service (e.g. in an INrtInstrumentation operation),
    // data is pushed through the proxy like this:
    // m_HubClientMethods.SccUpdateEngineTemperature(404.4m);
}
As you can see, the differences in the Hub and WCF Service code between the 2nd Way and the 3rd Way are small.
Additionally, there now need to be 2 hosts instead of one – one for the WCF Service and one for SignalR. The code to create the 2 hosts is fairly simple as well, and I’ll leave that for you to figure out. See the SignalR tutorials on self-hosting and on hosting SignalR in an ASP.NET web site.
The 3rd Way provides much more flexibility at a modest cost over the 2nd Way. However, the 3rd Way may have a higher risk of connection problems via the SignalR HTTP connection used by the WCF service. This risk was not present in the 2nd Way.
So there you have it. Which way is best for you?
One implementation detail to be aware of is that both the 2nd and 3rd Ways can utilize a custom WCF ServiceBehavior that the WCF Service Host adds to its ServiceDescription.Behaviors collection. When WCF instantiates the WCF RT Notification Service instance, this custom ServiceBehavior will cause either a Hub InstanceContext or a Hub HTTP Connection IHubProxy instance to be injected into a non-default constructor of the service instance. This is well beyond the scope of this article, but is worth exploring, as you will see from an in-depth read of the SignalR documentation.
What about Those Fragile Client Method Name Strings?
In all 3 Ways of WCF using SignalR, the Client Method Name Strings are shared between the client and server and represent a point of fragility. At best this fragility can be minimized by design, but it will still be there, and it will grow with every new Client Method added to a Hub. Here are ways to mitigate the risks these strings create:
- Always build a Client Method Name String Verification Tester, and run it after every build. It should test each Client Method Name String used on the server and verify that the method gets duly executed with the proper arguments on the client.
- Anton Kropp offers some insight into ways to detect mismatches of Client Method Name Strings on the server and client that result in silently failing broken code, plus other ideas to fix this problem. See his blog at http://onoffswitch.net/strongly-typing-signalr/ for more info and code downloads.
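As a minimal sketch of the verification idea, the tester can start by checking that every name string the server pushes with has a matching client registration. The name lists below are hypothetical; a real tester would harvest them from the server-side proxy class and the client-side registration code:

```javascript
// Hypothetical name lists; in a real tester these would be harvested
// from the server-side proxy class and the client-side registrations.
var serverMethodNameStrings = ["sccUpdateEngineTemperature", "sccUpdateFoo"];
var clientRegisteredHandlers = ["sccUpdateEngineTemperature"];

// Every string the server pushes with must have a client-side handler.
function findUnhandledMethodNames(serverNames, clientNames) {
  return serverNames.filter(function (name) {
    return clientNames.indexOf(name) === -1;
  });
}

var unhandled = findUnhandledMethodNames(
  serverMethodNameStrings, clientRegisteredHandlers);

if (unhandled.length > 0) {
  console.log("Mismatched Client Method Name Strings: " + unhandled.join(", "));
}
```

A name-matching pass like this catches renames at build time, though a full tester should still invoke each method end-to-end to verify the arguments as well.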
Afterword
I really like SignalR, its 2 APIs, its ease of development, and the Real-Time Push capabilities it provides. Thank you to Microsoft and the SignalR team for providing us with this framework. I am looking forward to using it in the future to deliver significant value to users.
I hope you benefit as much from reading this article as I did researching and writing it. Your comments are appreciated.
dotnetsilverlightprism blog by George Stevens is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. Based on a work at dotnetsilverlightprism.wordpress.com.
This article focuses on using AngularJS as the web client UI technology on the receiving end of SignalR data pushes.
For the past few weeks I’ve been engaged in a fascinating in-depth exploratory project to learn how to best use SignalR in various situations. SignalR is Microsoft’s “real time push” framework introduced in 2011. Version 2.0 was released a couple of weeks ago on October 17, 2013. SignalR provides the capability of server side software being able to push data to clients of all types: Web Clients running in a browser, and .NET clients like WPF apps, Console Apps, or Windows Services, plus others (see the SignalR web site). Thus, due to the request/response nature of HTTP that Web Clients use, for the first time there is an effective and very easy-to-use way for a server to notify Web Clients when something of interest happens on the server.
For example, SignalR makes updating a client UI from the server easy with real time data from:
- Instrumentation and alarms on machinery, or stock market feeds, or from business intelligence.
- Real time text notifications to users of things happening they want to know about. For example, “The Budget Report you ran this morning is now ready.” Or “There are donuts available in Conference Room 2.”
- Chat and message board apps.
From working with SignalR for the better part of a month, it is clear to me that this technology will become widely used. Indeed, at this very moment there are probably thousands of dashboards crying out for SignalR.
Microsoft Info Sources
Read more about the capabilities of SignalR at the Microsoft ASP.NET SignalR website: http://www.asp.net/signalr. Especially see “Introduction to SignalR” for an excellent description — http://www.asp.net/signalr/overview/signalr-20/getting-started-with-signalr-20/introduction-to-signalr
For detailed info on how to write the code for Web Clients please see this link: http://www.asp.net/signalr/overview/signalr-20/hubs-api/hubs-api-guide-javascript-client.
I suggest that you read one of the Tutorials and download its code. They are typically very simple.
Finally, it was a great help to me to read this link, so as to gain an understanding of Hubs and how to use them: http://www.asp.net/signalr/overview/signalr-20/hubs-api/hubs-api-guide-server.
Useful Info Sources for AngularJS Clients of SignalR
In my exploratory app I implemented both a traditional jQuery web client for SignalR, and also an AngularJS SPA (Single Page App) web client. You will find examples of simple jQuery web clients for SignalR in the ASP.NET SignalR site’s tutorials.
Here are a couple links I found very useful in implementing an AngularJS web client for SignalR:
This link shows a super simple example of an AngularJS client using SignalR: http://code2thought.blogspot.com/2013/09/doing-signalr-angularjs-way.html
Ingo Rammer and Christian Weyer have a very helpful free ebook devoted to AngularJS and .NET. One of their chapters is devoted to AngularJS and SignalR: http://henriquat.re/server-integration/signalr/integrateWithSignalRHubs.html
One key thing to note in the above code is how well the AngularJS service for talking to SignalR separates that concern from the jobs done by the Controllers that utilize the service. This creates high cohesion in both the Controllers and the SignalR service. And, since AngularJS services are Singletons, this service not only encapsulates behavior, but also encapsulates the state of the service that is shared amongst all the Controllers utilizing it. Finally, since Angular services are dependency injected into controllers (via a Service Locator built in to the AngularJS framework), there is loose coupling between the Controller and the service it uses. This allows for simple mocking and testing of the Controllers, separate from the service. The above characteristics are common throughout AngularJS, since one of its highest priority design goals was to produce very high levels of testability in AngularJS clients.
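To make the pattern visible without the AngularJS runtime, here is a framework-free sketch of the same idea: a singleton service encapsulating the SignalR concern and its shared state, injected into controllers. All names here are hypothetical illustrations, not the AngularJS or SignalR APIs:

```javascript
// Framework-free sketch of the pattern AngularJS gives you: a singleton
// service encapsulating the SignalR concern, injected into controllers.
// All names are hypothetical, not from the AngularJS or SignalR APIs.
function createInstrumentsService() {
  var latestTemperature = null;          // state shared by all controllers
  var subscribers = [];
  return {
    // In a real app this would be wired to the SignalR client event.
    onEngineTemperature: function (measData) {
      latestTemperature = measData;
      subscribers.forEach(function (cb) { cb(measData); });
    },
    subscribe: function (cb) { subscribers.push(cb); },
    getLatestTemperature: function () { return latestTemperature; }
  };
}

// A controller receives the service via injection, so it can be tested
// against a mock service instead of a live SignalR connection.
function DashboardController(instrumentsService) {
  var self = this;
  self.displayedTemperature = null;
  instrumentsService.subscribe(function (measData) {
    self.displayedTemperature = measData;
  });
}

var service = createInstrumentsService();        // the shared singleton
var controllerA = new DashboardController(service);
var controllerB = new DashboardController(service);

service.onEngineTemperature(98.6);               // simulated SignalR push
console.log(controllerA.displayedTemperature,
            controllerB.displayedTemperature);   // 98.6 98.6
```

Because both controllers receive the same service instance, one push updates them both, which is exactly the shared-state behavior the Singleton service provides in Angular.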
How to Unregister Event Handlers to Prevent Memory Leaks in AngularJS Controllers
When SignalR pushes a piece of data to a Web Client, the SignalR runtime on the client raises an event saying the data has arrived. Thus, Web Clients must register subscriptions to these events in order to receive the pushed data and process it.
With SPA’s you need to be fanatically concerned with unregistering ALL of the event handlers you register in order to:
- Prevent memory leaks, and
- Prevent performance degradation over time as more and more unremoved event handlers build up with each partial-page navigation. Each one responds to its event and does the work it is designed to do, even though the element it is registered on is no longer in use. Thus many CPU cycles are wasted. This can degrade the performance of an app to a crawl after a number of hours of heavy use.
Non-SPA web clients replace all their JavaScript code on each page-navigation – all the registered event handlers and their references are gone! No memory leaks, no problem. So the effects of not unregistering event handlers may seldom be apparent in a non-SPA.
With SPAs, however, the same JavaScript will hang around for multiple navigations to and from the partial pages inherent in SPAs. Thus, unregistering event handlers on partial page-navigations is an absolute requirement just as it is in Silverlight, WPF, and WinForms apps. The rule is “If you register an event handler, you must also unregister it before garbage collection of the element it was registered in. Period.” And writing the event handler unregistration code at development time is way, way less painful and expensive than trying to chase down memory leaks and performance bottlenecks just before (or even after, gulp) the code has been released into the wild.
Here is the code you can put in an AngularJS Controller to unregister the event handlers that were registered in the Controller. This trivial code handles the Angular $destroy event, which is raised just before the Controller’s scope is destroyed.
$scope.$on('$destroy', function () {
    // Here is the place for the code that unregisters each event handler
    // that was registered in the Controller, one by one.
});
Note that I have verified this works to unregister handlers for AngularJS events. But I have NOT yet used it to unregister JavaScript or DOM event handlers. Caveat emptor!
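One helpful detail: in AngularJS, $scope.$on itself returns a deregistration function, which makes the cleanup inside the $destroy handler mechanical. The sketch below models that register-returns-a-deregistrar pattern with a minimal hand-rolled emitter (not the Angular runtime) so the idea is visible on its own:

```javascript
// Minimal emitter modeling the AngularJS pattern where registering a
// handler returns a deregistration function (NOT the Angular runtime).
function makeEmitter() {
  var handlers = [];
  return {
    on: function (handler) {
      handlers.push(handler);
      return function deregister() {     // like $scope.$on's return value
        var i = handlers.indexOf(handler);
        if (i !== -1) handlers.splice(i, 1);
      };
    },
    emit: function (data) {
      handlers.slice().forEach(function (h) { h(data); });
    },
    count: function () { return handlers.length; }
  };
}

var emitter = makeEmitter();
var calls = 0;
var deregister = emitter.on(function () { calls += 1; });

emitter.emit("tick");     // handler runs once
deregister();             // the "$destroy" cleanup step
emitter.emit("tick");     // handler no longer runs, no leak remains

console.log(calls, emitter.count()); // 1 0
```

Collecting these deregistration functions as you register handlers, then calling them all in the $destroy handler, keeps the cleanup code short and hard to get wrong.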
I hope you find this article helpful.
George Stevens
Since I’m short on time right now due to being involved in a couple of fascinating software projects, I’m going to have a short article this month. Since posting last month’s article concerning helpful Angular Learning Sources, a couple more good sources have appeared. Both are from Jeremy Likness’ blog “C#er Image”, as follows:
1. His post of 9-19-2013 “10 Reasons Web Developers Should Learn AngularJS” passes on the knowledge Jeremy has gained from working on a substantial multi-person new development project using AngularJS with TypeScript. It’s not often you get such timely information on an emerging technology from a seasoned professional practitioner in that technology. Don’t miss this one! http://csharperimage.jeremylikness.com/2013/09/10-reasons-web-developers-should-learn.html
2. Jeremy’s post of 9-9-2013 “Synergy Between Services and Directives in AngularJS” demonstrates not only this topic, but also how a TypeScript interface can be used to decouple software parts from each other (i.e. program to an interface rather than an implementation) in AngularJS. http://csharperimage.jeremylikness.com/2013/09/synergy-between-services-and-directives.html
Thanks to Jeremy Likness for taking the time to write such useful articles.
George Stevens
Since posting the “.NET Single Page Applications (SPA): Helpful Info Sources” article in this blog in late July 2013, I’ve devoted my time to learning how to use the AngularJS SPA development framework. In the July blog article I identified AngularJS as one of the leading emerging SPA libraries.
The reason I’ve elected to learn AngularJS is that I have come to believe it is an exceptional framework for developing fairly SOLID client-side Rich Internet Applications (RIA) that may be small (a few screens), medium (up to 15 – 20 screens), or large (in excess of 20 screens). Based on my 5 years of RIA development in Silverlight, developing an app with more than 20 or so screens can be very time consuming and difficult unless the development framework supports an excellent separation of concerns. With Silverlight the MVVM pattern went a long way to facilitate the development of large RIAs, as did the use of Repositories, Dependency Injection, loosely coupled messaging/eventing, loosely coupled commands, and Prism style Regions, etc. All of these patterns support the SOLID principles in one or more ways. See Wikipedia for a definition of SOLID principles at http://en.wikipedia.org/wiki/SOLID_%28object-oriented_design%29.
In short, when a client-side RIA development framework supports the implementation of SOLID code, development of large, rich internet apps becomes cost effectively attractive. Without such support, a large app will become more and more difficult and costly to develop and maintain as its size increases, due to an ineffective separation of concerns. And, unfortunately, this expansion in difficulty and cost is exponential, due to the non-linear growth of code complexity without a strong separation of concerns.
AngularJS was developed explicitly to have very high testability of its apps. As a result, AngularJS has a fabulous separation of concerns built into it that is easily used by a developer to build SOLID apps, especially when using TypeScript to provide interfaces. Even without TypeScript’s interfaces, however, I am very impressed by how extensively AngularJS supports developing apps with a great separation of concerns. This means AngularJS apps have a high potential of being cost effectively testable, extensible, and maintainable.
Given the above, plus the large sustained demand for RIAs (both for end user apps and Line-of-Business apps) I have come to believe that AngularJS has a high probability of becoming a leading development platform for RIAs.
The following factors will act to lower the risk of choosing AngularJS as an RIA development framework at this point in time:
- AngularJS is a Google product with a full dev team supporting it, although it is also open source. The same goes for its testing frameworks Karma and End-To-End Tests, and for its AngularUI add-on library. Thus it will not “go stale” after a few years as many open source projects do. And you do not have to cobble together a bunch of open source JavaScript libraries to use AngularJS effectively. This increases the stability of the app during development and beyond into maintenance.
- There are currently no other JavaScript libraries that even come close to providing the features of AngularJS. AngularJS is truly unique: not just a library but a full featured client-side framework, complete with its own run-time that makes possible its 2-way data binding and its ability to extend HTML5, plus making HTML easily dynamic.
- Once you “get it”, writing code with AngularJS often produces nice GUIs surprisingly quickly, in many cases without writing nearly as much code as one would write in Silverlight for a similar feature.
- Like Silverlight, AngularJS supports the development of simple to complex “user controls” and “custom controls”, and reusable libraries thereof. Plus it supports developing reusable libraries of non-GUI elements like client-side services and providers. And, Dependency Injection is built into AngularJS so that such client-side services, providers, etc. can be cleanly injected into client code and client-side MVC controllers.
- AngularJS can also be advantageously used to develop non-SPAs, i.e. to develop regular web apps that transfer an entire page during each user navigation. With AngularJS, you get a good potential for a solid separation of concerns in such an app.
- A growing number of .NET RIA developers and MVPs who were leaders in the ramp up of Silverlight in 2008 – 2010 are currently blogging about AngularJS and developing video courses to aid in learning Angular. At this point they are:
- Jeremy Likness — http://csharperimage.jeremylikness.com/
- Dan Wahlin — http://weblogs.asp.net/dwahlin/
- Shawn Wildermuth — http://wildermuth.com/
Time will tell about the popularity of AngularJS. If you want to learn it, I’ve found the following info sources helpful myself.
If you had to read only 2 documents to get a solid conceptual view of AngularJS before diving in to the details, read these:
http://docs.angularjs.org/guide/overview and http://docs.angularjs.org/guide/concepts which has great diagrams.
If you could watch only one free video to get a good overview of what AngularJS development is about, at both a very detailed level and a higher level, watch this video by Dan Wahlin. I consider it a must to quickly “get it”!
If you could use only one free video course to get a good understanding of the key elements of AngularJS at a very low level, watch John Lindquist’s 46 free mini-lessons of roughly 5 minutes each:
If you could take one or two paid video courses to learn AngularJS, PluralSight has a couple:
“AngularJS Fundamentals” by Jim Cooper and Joe Eames.
“Building a Site with Bootstrap, Angular, ASP.NET, EF, and Azure” by Shawn Wildermuth.
If you do take the Eames and Cooper “AngularJS Fundamentals” course, be sure to view this blog for detailed assistance in putting together the development tools.
http://bardevblog.wordpress.com/2013/07/28/setting-up-angularjs-angular-seed-node-js-and-karma/
If you want to know what seasoned developers of web sites and web apps in JavaScript and jQuery think of AngularJS, read these blogs.
http://bardevblog.wordpress.com/2013/08/03/how-i-navigated-to-angularjs/
http://blog.artlogic.com/2013/05/02/ive-been-doing-it-wrong-part-1-of-3/
I hope you find these sources as helpful as I did.
George Stevens
During the past 4 weeks I’ve been increasingly exploring Single Page Applications (SPAs) in .NET. Up until last week all of my exploration was reading blogs and other online sources about what an SPA is, and the various architectures and software packages used to build them. Last week I downloaded the .NET MVC4 SPA Angular/Breeze Visual Studio project template. It contains an example of an SPA, and I went through a minutely detailed code walk-through for a couple of days, looking at every part of the example’s code. I was guided by the excellent tutorial linked on the Microsoft download page for this template (listed below).
Before I share the best online SPA info sources I’ve found, it is worthwhile to view the rapid rise of SPAs over the past 18 months. It will give you a sense of the role SPAs will play in the next few years.
In the last 5 years or so the popularity of Silverlight and Flash has demonstrated the strong, wide demand for Rich Internet Applications (RIAs). Silverlight was also popular with developers and software production companies for building RIAs, since it easily supports a strong separation of concerns with the MVVM pattern, Dependency Injection/IoC, Repositories, and more. A strong separation of concerns means software is less expensive to develop and thoroughly test, and less expensive to extend in the future. Mind you, on average the post-initial-release maintenance and extension work makes up about 60% of the Total Cost of Ownership to a software producer. Not small potatoes! For the source of this 60% number, please see my article about cost of ownership at https://dotnetsilverlightprism.wordpress.com/2012/02/19/the-relationship-between-software-structure-and-the-softwares-value-to-a-business/
In fall of 2011, as part of its initiative for Windows 8 and WinRT, Microsoft announced that it would no longer release subsequent upgrades of Silverlight beyond the just-released version 5, though Microsoft would continue to support Silverlight for 10 years. The result of this announcement was 1) Many software development organizations re-examined their RIA strategies and moved away from Silverlight to the HTML5/JavaScript technologies Microsoft was now touting for RIAs, and 2) Many developers who had led the move into Silverlight for RIA development moved to other technologies as well – many to the HTML5/JavaScript technologies.
The below charts (reproduced with permission from Indeed.com) show the fall of Silverlight and rise of HTML5, expressed as the number of jobs on Indeed containing either “Silverlight” or “HTML5 and .NET” in the job description text, as a percentage of all jobs on Indeed at the time. The trends are very clear.
Software developers are very inventive. Given the continued strong demand for Rich Internet Applications, and the difficulties of working with raw HTML5/JavaScript/jQuery to develop RIAs, people started developing more and more JavaScript libraries that make the development of RIAs in this medium easier, with better separation of concerns and better testability. An early example of one of these libraries is Knockout, which brings the popular and useful MVVM pattern to JavaScript clients. Now, 18 months after the fall of Silverlight began, a plethora of such JavaScript libraries has been developed. Below are the Indeed charts for several of them.
The rate of increase looks very similar to that of Silverlight during its steepest ascent.
Now enter the “Single Page Application”. An SPA is built using these kinds of JavaScript libraries (Please see the below Wikipedia link for a clear definition of what an SPA is). Today, some SPAs are beginning to look more and more like a Silverlight-type smart client app in terms of their internal software structure and separation of concerns. This is especially true for SPAs utilizing the Breeze library (for client side data caching and data management) and the Angular library (for a strong separation of concerns, including completely relieving the client JavaScript code from having to manipulate the DOM; rather, simple Angular abstractions are manipulated by JavaScript to cause changes to the View).
By way of summary, within the period of 18 months there has been a flurry of innovation in the HTML5/JavaScript arena. It has produced some very promising technologies in terms of being able to support rich, engaging user experiences in web browsers, while also supporting high productivity software development techniques. This makes the future of web apps in general, and SPAs in particular, look quite promising.
I found the following info sources to be very helpful in learning about what SPAs are, and the characteristics of the various JavaScript libraries they are built with.
What is an SPA?
1. http://en.wikipedia.org/wiki/Single_page_application
What is the architecture of an SPA and the JavaScript libraries they are built with? Note that SPAs are sometimes referred to as being in the MV* (MV Star) category.
1. http://coding.smashingmagazine.com/2012/07/27/journey-through-the-javascript-mvc-jungle/
3. The venerable http://knockoutjs.com/
What are some of the JavaScript libraries with exceptional separation of concerns?
and http://docs.angularjs.org/guide/overview
and http://docs.angularjs.org/guide/concepts
and http://www.breezejs.com/documentation/introduction
Where can I get .NET MVC4 Visual Studio project templates for SPAs?
1. http://www.asp.net/single-page-application/overview/introduction/other-libraries
2. The template for Angular/Breeze has a deep tutorial for the example app that comes with the template. The template is here: http://www.asp.net/single-page-application/overview/templates/breezeangular-template
Here is the link to the deep tutorial. http://www.breezejs.com/ng-spa-template?utm_source=ms-spa
How does Microsoft’s TypeScript fit in with SPAs and these kind of JavaScript libraries?
1. Please see the last part of Jeremy Likness’ blog article titled 30 Years of “Hello, World” for how they fit together. Start with the Silverlight section of the July 8, 2013 article, and continue through the TypeScript section to the end.
2. More details: http://www.piotrwalat.net/using-typescript-with-angularjs-and-web-api/
I hope you find these sources as helpful, and interesting, as I did.
Finally, some food for thought:
1. Given the innovation in the last 18 months or so, what will this space look like in a year? In 3 years?
2. Who are the emerging leaders? What kind of value to developers and end users do the various JavaScript libraries provide?
3. Most all of the libraries are open source. How will that impact niche leadership?
4. Except for TypeScript and some JavaScript Library specific SPA MVC4 project templates, Microsoft is notably absent from this space. Why?
It will be fascinating to see how things play out in this rapidly innovating niche.
George Stevens
In the spring of 2010 I ran across the new Reactive Extensions (Rx) and was impressed with their potential. But I did not have the time to do much more than read about their capabilities. I pushed the task of learning more on my stack.
A couple of weeks ago I had reason to pop Rx off of my “to learn” stack and spent some quality time looking through the body of introductory literature that is now quite large and useful. I’ll share the links I found most helpful shortly. You GOTTA check out Rx!
If you are on the receiving end of a stream of data or events, the Reactive Extensions can likely save you literally hours of coding! No kidding. In 10 to 15 lines of Rx code you can produce functionality to process a stream of data or events that would take hours to write in regular old C#. Rx uses LINQ to make this possible.
Do you want to process a stream of MouseMove events in the UI? Need to process and buffer data feeds asynchronously? Are you on the receiving end of data coming in from WebSockets that needs to be aggregated, grouped, or sorted? Then you GOTTA check out Rx! Read the below links.
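To give a feel for what those 10 to 15 lines look like, here is a minimal sketch in C# (the names and the simulated data feed are my own illustration, and it assumes the Rx NuGet package, Rx-Main, is referenced). It groups a stream of readings into batches of 5 and emits each batch's average — the kind of aggregation that takes a page of loop-and-state code without Rx:

```csharp
// Sketch only — assumes the Rx NuGet package (Rx-Main) is referenced.
using System;
using System.Linq;
using System.Reactive.Linq;

class RxSketch
{
    static void Main()
    {
        // Simulate a data feed as an observable stream of 10 readings.
        IObservable<int> readings = Observable.Range(1, 10);

        readings
            .Buffer(5)                           // group the stream into batches of 5
            .Select(batch => batch.Average())    // aggregate each batch with LINQ
            .Subscribe(avg => Console.WriteLine("Batch average: " + avg));
        // Prints: Batch average: 3, then Batch average: 8
    }
}
```

The same handful of operators (Buffer, Throttle, GroupBy, and friends) apply unchanged to MouseMove events or a WebSocket feed once you wrap the source in an IObservable, e.g. with Observable.FromEventPattern.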
MVP Mark Michaelis and Alan Greaves have recently written an extremely good article that clearly demonstrates the power of Rx and its ease of use:
http://visualstudiomagazine.com/articles/2013/04/10/essential-reactive-extensions.aspx
Lee Campbell has written a really nice looking eBook introducing Rx and teaching you how to use the Reactive Extensions. It is free online, as a web site at http://www.introtorx.com/ or you can download it to your Kindle from that site. Be sure to read Part 1 — Getting Started: Why Rx? It says when to use Rx, and when not to.
Finally, Microsoft has a wealth of learning materials, samples, videos, and more at http://msdn.microsoft.com/en-us/data/gg577609
Hope this helps,
George Stevens
For 4 1/2 of the 7 months between November 2012 and May 2013, I worked full time doing a deep dive into Service Oriented Architecture using WCF 4.5. The other 2 1/2 months I spent programming select topics in WPF, MEF, and Dependency Injection in .NET 4.5. All this was driven by my passion for building extensible apps, plus my long standing interest in cost effective software structure and architecture.
Two months were spent attending the IDesign Architect Master Class and Architects Clinic, each a week long, to learn the best practices for architecting robust, extensible, scalable Service Oriented apps with WCF. Then I spent 2 1/2 months writing code to implement this architecture using the following IDesign patterns:
- WCF Service as a Manager, Engine, or Resource Accessor (You’ll have to find out about these on your own. Not enough space here.)
- Queued Pub/Sub Pattern – Facilitates a WCF service (the publisher) in sending various messages to one or more subscribing WCF services via the Publish/Subscribe pattern, aka Observer design pattern.
- Message Bus – Uses a variation of the Queued Pub/Sub pattern to allow numerous WCF services to send and receive messages between one another.
- Workflow Manager – A pattern that allows variations in a Use Case, in a Business Workflow, or even in a Client UX Workflow to be encapsulated on the server within a single WCF service. This Workflow Manager WCF service can then invoke one of several “workflows” (which may or may not be state machines) it has access to on the server, based upon input from the client. A very simple example is a WCF service using a Strategy design pattern to do Income Tax Calculations appropriate to the state of residence passed in the WCF service operation’s arguments.
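Outside of WCF, the core of that last income-tax example is just the Strategy design pattern. A minimal C# sketch might look like the following (all names are hypothetical, and the rates are illustrative only, not real tax rules):

```csharp
using System;
using System.Collections.Generic;

// Each "workflow" variation is a strategy behind a common interface.
interface ITaxStrategy
{
    decimal Calculate(decimal income);
}

class WashingtonTax : ITaxStrategy
{
    public decimal Calculate(decimal income) { return 0m; }             // no state income tax
}

class OregonTax : ITaxStrategy
{
    public decimal Calculate(decimal income) { return income * 0.09m; } // illustrative rate only
}

class TaxWorkflowManager
{
    // In the real pattern this lookup happens inside a WCF service operation,
    // keyed by the state-of-residence argument sent by the client.
    static readonly Dictionary<string, ITaxStrategy> strategies =
        new Dictionary<string, ITaxStrategy>
        {
            { "WA", new WashingtonTax() },
            { "OR", new OregonTax() }
        };

    public static decimal CalculateTax(string state, decimal income)
    {
        return strategies[state].Calculate(income);
    }
}
```

Supporting a new state then means adding one strategy class and one dictionary entry — the service operation itself does not change, which is exactly the kind of encapsulated variation the Workflow Manager pattern is after.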
I found this deep dive fascinating, to say the least. Designing and programming multiple examples of the above patterns using the IDesign Architecture pushed me into advanced areas of WCF that I’d never known existed. And I learned so much – not only about architecture, WCF, and its various extensibility points, but also about the useful tools in IDesign’s ServiceModelEx library, how to integrate WPF Clients with this architecture, plus how to use the Managed Extensibility Framework to make key things easily extensible on both the Client and Server.
During this period I was on my own to come up with info sources to aid me in forging into unknown areas. Below I provide a list of the information sources I found most useful in the WCF part of my deep dive. If you look through previous articles in this blog you can find several similar articles on good info sources I also found in this deep dive — for MEF, and for using INotifyDataErrorInfo in WPF.
Helpful WCF Info Sources
1. Learning WCF by Michele Leroux Bustamante, O’Reilly, 2007.
This book is a great place to start climbing up the WCF learning curve. Although I’d consumed WCF services in Silverlight clients for several years prior to this, plus created several WCF services as well, I had just scratched the surface. After I worked the majority of the labs in this book during the course of a week, I had a much better grasp of WCF and its terminology and concepts, plus a real boost in my confidence in my ability to use the framework in new ways that were outside my previous comfort zone.
2. Programming WCF Services, 3rd Edition, by Juval Lowy, O’Reilly, 2010.
This book is key to learning both the basics and the advanced features of WCF and SOA. If there were only one book or article I could read about WCF, this is the one I’d read. Be sure to read the appendices for advanced topics like the Pub/Sub pattern, Message Headers and Contexts, etc.
3. Juval Lowy’s IDesign web site contains a wealth of free downloadable WCF software and examples, plus information about the architecture design services and the top notch classes he offers.
Here’s the link: http://www.idesign.net/Downloads. You can also get his library of WCF utilities and extensions called ServiceModelEx. On the Downloads page enter “Essentials” into the “Filter by Category” box. Then scroll down till you see ServiceModelEx and click the link to get to the download screen. Among the several utilities I used from ServiceModelEx, I found the GenericResolver, the Pub/Sub utility and its persistent subscription manager, plus the TransactionalDictionary quite useful.
4. MSDN Magazine has published a number of excellent articles on WCF over the years. I’ve found them to be good at giving me a clear conceptual understanding of various aspects of WCF.
Here is the link to the MSDN Magazine Author’s Articles index for Juval Lowy, who has written many useful articles over the years. http://msdn.microsoft.com/en-us/magazine/ee532098.aspx?sdmr=JuvalLowy&sdmi=authors
Here are the links to several articles by Aaron Skonnard that I found informative:
- WCF Messaging Fundamentals: http://msdn.microsoft.com/en-us/magazine/cc163447.aspx
- WCF Bindings In Depth: http://msdn.microsoft.com/en-us/magazine/cc163394.aspx
- WCF Addressing In Depth: http://msdn.microsoft.com/en-us/magazine/cc163412.aspx
- Extending WCF with Custom Behaviors: http://msdn.microsoft.com/en-us/magazine/cc163302.aspx
5. Finally, I found the series of articles on WCF Extensibility in the blog of Carlos Figueira to be very useful. They typically come with a code example. Carlos is a Senior Software Development Engineer in Test at Microsoft, currently working on Azure. He previously worked on WCF and the ASP.NET Web API among other projects. You can find the Index Page to his articles on WCF Extensibility at:
http://blogs.msdn.com/b/carlosfigueira/archive/2011/03/14/wcf-extensibility.aspx
I hope you find these sources as helpful as I did.
George Stevens

