Monday, October 8, 2012

Ways to Make Top Performers Effective Managers


It's a common scenario: A managerial position becomes available and is filled by a top performer with minimal or no previous management experience. Yet it makes sense. Shouldn't a top performer be able to easily make the transition to manager? Shouldn't that person be able to guide others to his or her same level of productivity? The answer is a 100 percent, absolute maybe.
While top performers likely have solid domain skills, coupled with a strong motivation to succeed, there's a good chance they have not been afforded sufficient opportunity to develop effective management techniques. For some, these skills can be learned on the job. For others, the consequences of a poor managerial fit can be significant in terms of lost productivity and morale for the new manager and his or her direct reports.
Therefore, prior to promoting a top performer with minimal or no managerial experience, assess the candidate's strengths and forward-looking potential in nine core areas of effective management.
This analysis can ensure consistently smooth management transitions and keep a company operating at peak performance as it identifies whether a top performer is ready to lead now, is better-suited for some limited managerial experiences and additional training, or perhaps has a skill set and disposition that will only thrive in an individual contributor role. Consider: Can the new manager execute these nine core skills?

1. Move from tactical to strategic.
Is the employee ready to let go of his or her day-to-day responsibilities and play a more conceptual or strategic role? Some managers believe they need to understand every last detail of what their employees are working on.
Commonly referred to as "micro-managing," this type of behavior can make otherwise content employees burn out and leave a company. For a top performer who excels at the tactical level, managing others to achieve the same level of success may not seem as fulfilling.
Is the employee prepared for this potential shock? Many top performers are capable of the transition from tactical to strategic thinking, provided they have access to the right resources, such as a mentor or applicable management training courses.

2. Defend the team.
Is the employee ready to defend his or her new direct reports and support them in public? Is the employee ready to be a leader? Leaders absorb rather than deflect criticism. Leaders push praise downward to their employees and proactively look for ways to portray their direct reports in a positive light.
In short, leaders have a deep understanding of the phrase, "praise in public, condemn in private." Lots of top performers have healthy, competitive egos. Don't assume that deflecting praise and supporting direct reports is a natural instinct for new managers.

3. Build trusting relationships.
Can the employee develop a strong, trusting relationship that engenders compassion and prudent responses to change? As a cautionary tale, "Jerry" really enjoyed working for a manager until the reasons behind some recent absences came into question.
Jerry's son was in and out of the hospital, and thus, he needed to unexpectedly miss some work during a two-week period. Rather than show compassion and understanding, Jerry's manager accused him of interviewing. The manager's paranoia quickly became a self-fulfilling prophecy, as Jerry decided it wasn't worth working for someone who so quickly questioned his integrity. Jerry's example illustrates the risk associated with promoting a top performer before understanding his or her ability to trust and respect others.

4. Delegate.
Does the employee know how to assign work and shepherd that work through to completion? Consider the following scenario:
Manager: "[Employee], I need you to do X. I need this done because of Y. I'd really like to have this work completed by Z. Do you have any questions? Was this clear?"
Employee: "Got it."
Manager: "Great. Please let me know if you need any additional help."
This seems simple. Employees like to understand what work is expected of them, why the work is important, and when the work should be completed. Once the assignment is given, managers can use a variety of actions to stay on top of progress, including daily check-ins, one-on-one meetings and regular staff meetings. This example is deceptively easy; yet, in the frantic pace of business, this type of clear, concise, two-way communication often is lost.

5. Teach and mentor.
In the event that assignments require additional help or instruction, does the top performer embrace the idea of teaching and mentoring? Does he or she have the patience to answer employees' questions respectfully, in detail, more than once? Managers who return employee questions with an impatient or arrogant tone will eventually find they have fewer questions to answer, as employees will be more reluctant to expose their weaknesses or challenge ideas.
Managers who answer employee questions in an unassuming, non-condescending manner will be able to foster and sustain open communication channels that are vital for employee development and team productivity.

6. Admit mistakes.
Does the employee know how to apologize or acknowledge a mistake? For example, a new manager arrogantly corrects an employee in a cross-functional meeting and subsequently learns the employee's assertion was accurate. Does the manager have the self-awareness and willingness to admit the mistake not only to the employee but also to the other meeting participants? This is necessary to help restore cross-functional trust in the employee who the manager publicly and erroneously contradicted. These corrective steps will be appreciated by most employees. On the other hand, if the manager doesn't take these steps, he or she will quickly lose the team's respect.

7. Leverage others' strengths.
Is the employee threatened by colleagues who have greater subject matter expertise? For a newly promoted manager, there is an increased likelihood that certain employees will know more about a specific domain. For example, a new vice president of brand marketing may be asked to manage the product marketing group, as well. Is this vice president willing to roll up his or her sleeves and learn about that group on a tactical level?
Rather than hide from knowledge they don't have, the best managers ask the right questions to understand their employees' day-to-day responsibilities. By doing so, effective managers can engage subject matter experts to provide a well-articulated recommendation and then implement, adjust or reject that proposal based upon their sense of how it fits into the broader company strategy.

8. Manage each employee.
Can the new manager alter his or her managerial approach by direct report? Does the prospective manager have a one-size-fits-all management style, or does he or she recognize that individuals may need to be managed differently? Employees with young children are likely to request time to attend school events or unexpectedly miss work due to a child's illness.
Younger, single employees may be hungry to prove themselves by offering to own too much work. Can the potential manager recognize the employees' motivational differences and alter his or her managerial style accordingly? The best managers hold everyone on the team accountable for expected behaviors and results, while also understanding and capitalizing on the individual motivations of each team member.

9. Take time to manage.
Has the company given the new manager the time needed to actually manage? If a top performer has moved from individual contributor to managing a group of five or seven people, for example, there is undoubtedly a need to scale back on tactical, role-based activities to find the pulse of his or her new team.
A managerial role requires building a rapport, delegating responsibilities and architecting a team's broader long-term strategy. When promoted, many top performers will initially carve out more work time per day to ambitiously try to handle their legacy tasks and their newly acquired role. This early push is not sustainable. The new manager, and the company, will need to understand and be receptive to the fact that his or her individual responsibilities should now account for no more than 50 percent of work time, and likely much less.
Each of these nine components of effective management requires organizational commitment and an adjustment period in order to achieve a smooth transition, best fit and continued productivity for new managers and their employees. However, there often is more accountability for the organization regarding this ninth and final point.
Are top performers expected to manage effectively and maintain their previous workloads? Or are they given the time they need to manage their new direct reports? Providing employees with a manager's title without supplying enough time for them to actually manage is a fruitless exercise.

The Case for Careful Selection
There are potential consequences of not incorporating these nine dimensions into the managerial selection process. Ineffective managers can alienate other departments, or worse, their employees, which can lead to significantly reduced group productivity and increased attrition. As merit budgets tighten and companies try to do more with less, the cascading effects of a toxic manager pose an even greater threat to organizational success.
Top-performing individuals don't necessarily become top-performing managers. To succeed, new managers require time, training and guidance. Management consultants may never reach full agreement on the components of effective management, but these nine core skills comprise a practical evaluation of a top performer's readiness to manage and a company's readiness to prepare employees for this next step.

Please share your views, comment on this...

Jitendra Singh

Monday, July 4, 2011

Using C# Yield for Readability and Performance

I must have read about "yield" a dozen times, but only recently have I begun to understand what it does and the real power that comes along with it. I'm going to show you some examples of where it can make your code more readable, and potentially more efficient.

To give you a very quick overview of how the yield functionality works, I first want to show you an example without it. The following code is simple, yet it’s a common pattern in the latest project I’m working on.

IList<string> FindBobs(IEnumerable<string> names)
{
    var bobs = new List<string>();

    foreach(var currName in names)
    {
        if(currName == "Bob")
            bobs.Add(currName);
    }

    return bobs;
}

Notice that I take in an IEnumerable<string>, and return an IList<string>. My general rule of thumb has been to be as lenient as possible with my input, and as strict as possible with my output. For the input, it clearly makes sense to use IEnumerable if you’re just going to be looping through it with a foreach. For the output, I try to use an interface so that the implementation can be changed. However, I chose to return the list because the caller may be able to take advantage of the fact that I already went through the work of making it a list.

The problem is, my design isn’t chainable, and it’s creating lists all over the place. In reality, this probably doesn’t add up to much, but it’s there nonetheless.

Now, let’s take a look at the "yield" way of doing it, and then I’ll explain how and why it works:

IEnumerable<string> FindBobs(IEnumerable<string> names)
{
    foreach(var currName in names)
    {
        if(currName == "Bob")
            yield return currName;
    }
}

In this version, we have changed the return type to IEnumerable<string>, and we’re using "yield return". Notice that I’m no longer creating a list. What’s happening is a little confusing, but I promise it’s actually incredibly simple once you understand it.

When you use the "yield return" statement, the C# compiler wires up a whole bunch of plumbing code for you, but for now you can pretend it's magic. When the calling code (not listed here) starts to loop, this method effectively gets called over and over again, but each time it resumes execution where it left off.

Typical implementation:

  1. Caller calls function
  2. Function executes and returns list
  3. Caller uses list

Yield implementation:

  1. Caller calls function
  2. Caller requests item
  3. Next item returned
  4. Go to step #2
Although the execution of the yield implementation is a little more complicated, what we end up with is an implementation that "pulls" items one at a time instead of having to build an entire list before returning to the client.

In regards to the syntax, I personally think the yield version is simpler and does a better job of conveying what the method is actually doing. Even the fact that I'm returning IEnumerable tells the caller that its only concern should be that it can "foreach" over the returned data. The caller can now make their own decision about whether to put it in a list, possibly at the expense of performance.

In the simple example I provided, you might not see much of an advantage. However, you’ll avoid unnecessary work when the caller can "short-circuit" or cancel looping through all of the items that the function will provide. When you start chaining methods using this technique together, this becomes more likely, and the amount of work saved can possibly multiply.
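To make that short-circuiting concrete, here is a small, self-contained sketch; the sample data and the use of First() are mine, not from the original project. The trace line shows how far the source sequence is actually enumerated:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class YieldDemo
{
    static IEnumerable<string> FindBobs(IEnumerable<string> names)
    {
        foreach (var currName in names)
        {
            // Trace line: shows how far enumeration actually proceeds
            Console.WriteLine("Inspecting " + currName);
            if (currName == "Bob")
                yield return currName;
        }
    }

    static void Main()
    {
        var names = new[] { "Alice", "Bob", "Carol", "Bob", "Dave" };

        // First() stops pulling as soon as one "Bob" is yielded,
        // so "Carol", the second "Bob" and "Dave" are never inspected.
        var first = FindBobs(names).First();
        Console.WriteLine("Found: " + first);
    }
}
```

Running this prints "Inspecting" lines only for "Alice" and "Bob"; the list-based version would have walked all five names and allocated a list first.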

One of my first reservations about using yield was the potential performance implication. Since C# is keeping track of what is going on in what is essentially a state machine, there is a bit of overhead. Unfortunately, I can't find any information that quantifies the performance impact. I do think the potential advantages I mentioned should outweigh the overhead concerns.

Conclusion

Yield can make your code more efficient and more readable. It’s been around since .NET 2.0, so there’s not much reason to avoid understanding and using it.

Have you been using yield in interesting ways? Have you ever been bitten by using it? Leave a comment and let me know!

 

Monday, March 7, 2011

Difference between List, ObservableCollection and INotifyPropertyChanged

Introduction
This article explains the basic differences between List, ObservableCollection and INotifyPropertyChanged.
Difference between List<T>, ObservableCollection<T> and INotifyPropertyChanged
List<T>
It represents a strongly typed list of objects that can be accessed by index. It provides methods to search, sort, and manipulate lists. The List<T> class is the generic equivalent of the ArrayList class. It implements the IList<T> generic interface using an array whose size is dynamically increased as required.
Drawbacks
In ASP.NET, we simply use DataSource and DataBind() to bind the data, but in Silverlight it is slightly different. Databinding in ASP.NET is done in a stateless way - once that binding operation is completed, it's a done deal and if you want to change anything, you have to manipulate the underlying controls that were created as a result of the data binding, or else change the underlying data objects and call DataBind() again. That’s what we are used to – but it’s not a good practice.
In the sample application, the values in the list are added, removed and changed during runtime in the code behind. The changes in the list will not be updated to the UI (Datagrid).
ObservableCollection<T>
ObservableCollection is a generic dynamic data collection that provides notifications (using an interface "INotifyCollectionChanged") when items get added, removed, or when the whole collection is refreshed.
Note: WCF service proxy class in Silverlight will use this type of collection by default.
Drawbacks
It does not provide any notifications when any property in the collection is changed.
In the sample application, the values in the observable collection are added, removed and changed during runtime in the code behind. The operations (adding and removing an item) in the observable collection will be updated to the UI (Datagrid). But any change in the existing item will not be updated to the UI.
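A minimal console sketch (outside Silverlight, purely illustrative; the Person class is mine) shows both sides of this behavior:

```csharp
using System;
using System.Collections.ObjectModel;

class Person
{
    public string Name { get; set; }
}

class ObservableDemo
{
    static void Main()
    {
        var people = new ObservableCollection<Person>();
        people.CollectionChanged += (s, e) =>
            Console.WriteLine("Collection changed: " + e.Action);

        people.Add(new Person { Name = "Jerry" });  // fires CollectionChanged (Add)
        people.RemoveAt(0);                         // fires CollectionChanged (Remove)

        people.Add(new Person { Name = "Tom" });
        people[0].Name = "Jerry";  // no event: a property change on an existing
                                   // item is invisible to ObservableCollection
    }
}
```

The last line is exactly the gap that INotifyPropertyChanged, described next, is meant to fill.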
INotifyPropertyChanged
INotifyPropertyChanged is not a collection; it's an interface implemented by data object classes to provide a PropertyChanged notification to clients when any property value changes. It lets you raise the PropertyChanged event whenever the state of the object changes, so the underlying collection or container is notified that the state has changed.
INotifyPropertyChanged works with any type of collection, such as List<T>, ObservableCollection<T>, etc. A code snippet which uses INotifyPropertyChanged is shown below:
public class UserNPC:INotifyPropertyChanged
{
    private string name;
    public string Name {
        get { return name; }
        set { name = value; onPropertyChanged(this, "Name"); }
    }
    private int grade;
    public int Grade {
        get { return grade; }
        set { grade = value; onPropertyChanged(this, "Grade"); }
    }

    // Declare the PropertyChanged event
    public event PropertyChangedEventHandler PropertyChanged;

    // OnPropertyChanged will raise the PropertyChanged event passing the
    // source property that is being updated.
    private void onPropertyChanged(object sender, string propertyName)
    {
        if (this.PropertyChanged != null)
        {
            PropertyChanged(sender, new PropertyChangedEventArgs(propertyName));
        }
    }
}
In the above code snippet, whenever a value is set to a property, the method “onPropertyChanged” will be called which in turn raises the PropertyChanged event.
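For a quick illustration of the event firing (console-only; assumes the UserNPC class above is in scope):

```csharp
using System;

class NpcDemo
{
    static void Main()
    {
        var user = new UserNPC();
        user.PropertyChanged += (s, e) =>
            Console.WriteLine("Property changed: " + e.PropertyName);

        user.Name = "Jerry";   // prints: Property changed: Name
        user.Grade = 5;        // prints: Property changed: Grade
    }
}
```

If UserNPC instances are placed inside an ObservableCollection<UserNPC>, a bound DataGrid can then reflect both collection changes and per-item property changes.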

Wednesday, September 29, 2010

Visual Studio 2010 bridges the gap between developers and testers - Technology news


Bangalore: Among the phrases most commonly heard between testers and developers during the development and testing cycle are "Well, it worked on my machine" and "I can't reproduce the bug that you filed." The friction is frustrating for both roles, and it is a gap that a software development lifecycle, already carrying a fair amount of uncertainty and inherent risk, cannot afford. Application Lifecycle Management (ALM) tools have emerged to bridge this long-standing gap. For Microsoft, which stepped into the realm with Visual Studio 2010 (VS2010), it's about helping to form integrated teams, where the flow of information from team member to team member is streamlined, decreasing the level of conflict within the cycle.




In a conversation with SiliconIndia, Jason Zander, Corporate Vice President, Visual Studio; Amit Chatterjee, Managing Director of Microsoft India Development Centre and Brian Harry, Technical Fellow at Visual Studio explained how VS2010 tries to bridge the gap between developers and testers and how the various products in the ALM tool brings better collaboration between the two.

Diminishing conflicts via Lab Management
Today, 70 percent of the testing done in the industry is manual, and until now this segment was not served by tools. Testers had test cases written on paper or in a Word document, read them, and drove the product along those test lines. They could report a bug as a defect but not fix it, and if any information was missed while noting down the details, the report created confusion between developers and testers. This is now made easier with the right tools in place. "So, we've tweaked the ALM solutions via integrating modern tools for testers to ensure an integrated environment. Microsoft's Visual Studio 2010 puts a halt to this 'bug ping-pong'. In fact the Lab Management solution in VS2010 extends the existing Visual Studio Application Lifecycle Management platform to enable integrated Hyper-V based virtual machine management. Lab Management automates complex build-deploy-test workflows to optimize the build process, decrease risk and accelerate your time to market," says Zander.

Lab Management helps reduce the costs associated with setup, tear-down and restoration of complex virtual environments to a known state for build automation, test execution and build deployment. This eliminates waste across the entire application lifecycle by allowing development and QA to work together to optimize the build process and minimize regression testing efforts. Lab Management also enables customers to easily file 'rich actionable bugs' with links to environment snapshots that developers can use to recreate the tester's environment and identify issues. Above all, it helps streamline collaboration between development, Quality Assurance and operations, helping organizations achieve higher ROI.

Simplifying App Development
Organizations are constantly looking to address business needs with applications that are flexible and scalable enough to match those needs as they change; but the time and resources to build those applications are not always available. "Visual Studio LightSwitch helps developers to quickly and affordably build applications that integrate with their data systems and Web services, work with a variety of hosting and deployment options, and work with third-party plug-ins," says Chatterjee.

LightSwitch is intended for anyone who needs to quickly and affordably create business applications. It is also an ideal tool for professional developers who need to build great-looking customer applications and want to kick start the development with a business application based on the LightSwitch templates.

Building Applications On and For Cloud
There have been large investments in this area. We've offered Azure and SQL Azure for data and data access, and we've shipped six versions of VS-based tools for Azure. There is a major release of Visual Studio every two to three years, and we also keep producing updates for the existing versions of the tools. VS2010 ships with the latest version of the Azure tools, released in April 2010, as well as newer components like Visual Studio LightSwitch, the simplest way to build business applications for the desktop and the cloud.




Migration from Visual SourceSafe (VSS) to TFS
VSS is a source control software package oriented toward small software development projects. Its next-generation successor, Team Foundation Server (TFS), offers source control, data collection, reporting and project tracking, and is intended for collaborative software development projects. "VSS is a very popular product we have. The answer to why one should move from VSS to TFS lies in when each was built. VSS was designed and built in the early 1990s. It was an innovation at that time because, when it was created, the state of the art was the RCS and SCCS command lines and PVCS, so VSS brought version control to the masses. It was a simple, easy-to-approach version control system, so it became popular," says Harry.

"And as the time went by, the state of software development has evolved - we have Azure, unit testing, large teams of software developers, etc. Now it is much about the collaboration among the team. This is solved by TFS. It gives a much bigger look at the software development process. So over the time we expect that as people's software development process matures, the vast majority of people will move from VSS to TFS. We've made a lot of help available for the move in the form of whitepapers. We've designed TFS similar to VSS so that the users are familiar with the path, command lines, explorer window, etc. We've also given migration tools to convert the VSS data to TFS. So it's a natural evolution from a focused version to an overall ALM tool ," he added.

Confirming that TFS supports multiple platforms, he explained, "There is a feature called Team Explorer which allows you to access TFS from within development environments on Mac, UNIX and Linux. It has an Eclipse plugin, which is a great solution for developers working in Eclipse. So it solves a whole bunch of people's problems. There is also a web interface called Team Web Access, intended for non-developers like project managers, analysts, etc., who need access to development information. With this, you don't need to install VS; you can just browse the web and review reports, project stats, etc. And it works in any browser - IE, Chrome and others."

Enhancing the User Experience
Microsoft Visual Studio 2010 delivers a modern, enhanced user experience that makes understanding the current context more natural. It enables the users with:

* Clear UI Organization.
* Reduced clutter and complexity.
* Improved editor.
* Better support for floating documents and windows.
* Enhanced document targeting.
* Focused animations for action feedback.

Democratizing the Application Lifecycle Management
Microsoft introduced its ALM tool at a time when there were only expensive, complicated and disjointed tools in the market. Realizing there was a need, Microsoft came up with a single tool that is inexpensive and integrates all the different roles to work together in a collaborative environment. Testing has been integrated alongside development, and a similar impact has been made on the architecture side to remove bottlenecks and integrate the efforts of multiple people across the application lifecycle.

Visual Studio Team System 2010 delivers new capabilities for everyone on a project, including architects, developers, project managers and testers. VSTS 2010 helps to:

* Discover existing code assets with the new Architecture Explorer.
* Design and share multiple diagram types, including use case, activity and sequence diagrams.
* Tooling for better documentation of test scenarios and more thorough collection of test data.
* Run tests impacted by a code change with the new Test Impact View.
* Gated check-in, branch visualization and build workflow allow for enhanced version control.

As Windows Phone 7 is expected to hit the market soon, VS2010 will further mount its popularity as it supports application development for Windows Phone 7.

Friday, September 24, 2010

How to search all columns of all tables in a database for a keyword

Create this procedure in the required database and here is how you run it:

--To search all columns of all tables in Pubs database for the keyword "Computer"
EXEC SearchAllTables 'Computer'
GO

Here is the complete stored procedure code:

CREATE PROC SearchAllTables
(
 @SearchStr nvarchar(100)
)
AS
BEGIN 

 CREATE TABLE #Results (ColumnName nvarchar(370), ColumnValue nvarchar(3630))

 SET NOCOUNT ON

 DECLARE @TableName nvarchar(256), @ColumnName nvarchar(128), @SearchStr2 nvarchar(110)
 SET  @TableName = ''
 SET @SearchStr2 = QUOTENAME('%' + @SearchStr + '%','''')

 WHILE @TableName IS NOT NULL
 BEGIN
  SET @ColumnName = ''
  SET @TableName = 
  (
   SELECT MIN(QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME(TABLE_NAME))
   FROM  INFORMATION_SCHEMA.TABLES
   WHERE   TABLE_TYPE = 'BASE TABLE'
    AND QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME(TABLE_NAME) > @TableName
    AND OBJECTPROPERTY(
      OBJECT_ID(
       QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME(TABLE_NAME)
        ), 'IsMSShipped'
             ) = 0
  )

  WHILE (@TableName IS NOT NULL) AND (@ColumnName IS NOT NULL)
  BEGIN
   SET @ColumnName =
   (
    SELECT MIN(QUOTENAME(COLUMN_NAME))
    FROM  INFORMATION_SCHEMA.COLUMNS
    WHERE   TABLE_SCHEMA = PARSENAME(@TableName, 2)
     AND TABLE_NAME = PARSENAME(@TableName, 1)
     AND DATA_TYPE IN ('char', 'varchar', 'nchar', 'nvarchar')
     AND QUOTENAME(COLUMN_NAME) > @ColumnName
   )
 
   IF @ColumnName IS NOT NULL
   BEGIN
    INSERT INTO #Results
    EXEC
    (
     'SELECT ''' + @TableName + '.' + @ColumnName + ''', LEFT(' + @ColumnName + ', 3630) 
     FROM ' + @TableName + ' (NOLOCK) ' +
     ' WHERE ' + @ColumnName + ' LIKE ' + @SearchStr2
    )
   END
  END 
 END

 SELECT ColumnName, ColumnValue FROM #Results
END

Thursday, June 3, 2010

Implement paging in a SQL Server stored procedure

CREATE PROCEDURE [dbo].[GetMemberList]
@startIndex INT = 1,
@maxRecords INT = 10
AS
BEGIN TRY
-- SET NOCOUNT ON added to prevent extra result sets from interfering with SELECT statements.
SET NOCOUNT ON;

DECLARE @start_id INT, @total_rec INT;

SET ROWCOUNT @startIndex;

SELECT @start_id = Id, @total_rec = COUNT(Id) OVER() FROM Members
ORDER BY Id;

SET ROWCOUNT @maxRecords;

SELECT mem.*, @total_rec AS TotalRecords FROM Members mem
WHERE mem.Id >= @start_id -- rows from the start of the requested page onward
ORDER BY mem.Id;

SET ROWCOUNT 0;

END TRY
BEGIN CATCH
EXEC [IErrorLog];
END CATCH
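
For example, assuming a Members table keyed by an Id column, a page of up to ten records starting at the 11th row (ordered by Id) could be fetched like this; the parameter values are illustrative:

```sql
-- Fetch the second page of 10 members, ordered by Id:
EXEC [dbo].[GetMemberList] @startIndex = 11, @maxRecords = 10;
```

Note that SET ROWCOUNT is a legacy mechanism; on SQL Server 2012 and later, ORDER BY ... OFFSET/FETCH is the more idiomatic way to page.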