
Recent Articles

29 Nov

Movember comes to a close

Hello everyone,

Thanks again to all those who contributed to this fantastic cause and supported my moustache growth over the past 29 days. So far I have raised $540. The end of Movember’s horrible moustache season is only two days away (31 hours and 10 minutes by my wife’s count), and soon the opportunity to contribute will have passed. If you have not yet made it to my Movember website to track my progress throughout this month and make a donation, please do so before it’s too late; click this link to find out how to donate: http://mobro.co/jonhartney/. You can also find out more there about the charities that Movember supports.

Most impressive moustache ever!

OK, my results are far less impressive than Wyatt Earp’s acclaimed moustache, but I did put in a valiant effort. Thanks again for your support.

Donate and see the moustache progress until end of day November 30th: http://mobro.co/jonhartney/

25 Aug

Effective Knowledge Management for Federated Support Teams

Is it even possible?

There are a number of complexities that come with running a large IT organization within any company, not least the complex architecture that can exist to support software, applications, hardware, and other IT-specific technologies. Each technology can create a specialized silo of support for a niche part of this complex web, yet the web itself is invisible to the organization’s customers: the consumers of IT who rely on it for their day-to-day work. When an end user calls the “helpdesk” and, heaven forbid, pushes the wrong menu option, how does one group of specialists connect the customer with the correct group of specialists? Organizations attempt to build large structures of knowledge (knowledge bases) to aid in switchboarding callers to the correct base support group or on to the correct second-level support group. Over time, these knowledge bases deteriorate through improper maintenance, forcing the knowledge back into the heads of individual support personnel, which is exactly what the organization was trying to avoid in the first place. What follows are my meanderings on the best practices that should be followed to keep the knowledge base alive and effective.

For eight months I supported a large portfolio of applications, and as I progressed from month three to four to five and so on, the workload continued to mount. Beyond the regular duties of taking, logging, researching, transferring, and following up on service calls, there were a number of business processes that needed tuning to make the job more effective; to add to the pile of work, there were also departmental projects that needed supporting. This is the typical life and workload of a support analyst.

About midway through month five, there was a significant change in the support and knowledge infrastructure. When average call resolution time climbs upwards of 10 minutes because of changes to the support system and knowledge infrastructure, something needs to give. Granted, the system wasn’t perfect prior to these organizational improvements, but at least the knowledge base was up to date. Updating the knowledge support system, testing it for quality, and ensuring it would work as part of the new support processes was an insurmountable task. The changes in phone numbers and service departments, compounded by the existing do-it-yourself approach to departmental data quality, made the issue larger than it needed to be. If there were enforced governance standards surrounding the knowledge base, they were not communicated widely; and when I asked about standard change processes, even seasoned employees were confused about what the formal processes were, thanks to a flurry of changes paired with light communication on the subject.

The standards around an IT organization’s knowledge base need to be both rigorous and agile, and those accountable for the knowledge base need eyes and ears in every corner of the organization it documents and supports. Furthermore, policy and process should be formally defined to communicate how knowledge base changes take place. I do understand that there are safeguards and data owners in place to ensure that their knowledge is not lost or incorrect; however, if they are not part of, or explicitly informed about, large changes sweeping the support organization, they cannot do their job. This all sounds like common sense, yet even large organizations that believe they have it in place can run a program that looks healthy on metrics and reports while, at the ground level, the processes are falling apart under bulky and ineffective governance. The quality of support information deteriorates over time, and the support process for a single application area can be one or two bad employees away from sliding into irrelevance, full of information that is simply outdated and unusable.

In a large organization this type of governance is extremely difficult and can be politically tumultuous. Either one centralized Knowledge Manager or a team of federated Knowledge Managers should be dedicated to policing and enforcing the updating, maintenance, and deletion of items in the knowledge base. Dual roles and side roles are fine for the practitioners who work with the actual data, but at least one dedicated role per IT organization should exist to ensure that the knowledge is correct.

So, how can one measure the correctness of the data in the knowledge base? There are a number of metrics I can see, and I’m sure standardized metrics exist, but I’ll start at the ground (support) level. Useful indicators include: the number of times calls are transferred, by issue or asset; customer complaints or dissatisfaction levels, especially with regard to specialized applications where more detail is needed in the knowledge base for adequate handling; support-staff complaints about the findability of specific issues in the knowledge base; low levels of knowledge base use; and an increasing amount of documentation that support teams store on local or shared drives.
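As a rough illustration of the first metric (and not part of any tooling described above), here is a minimal sketch of counting call transfers by asset from a ticket-system export. The file name and column names are assumptions invented for the example; any real ticket system would need its own extract.

    import pandas as pd

    # Minimal sketch: count call transfers per asset from a hypothetical
    # ticket-system export. "tickets.csv" and its columns (ticket_id,
    # asset, transfer_count) are assumed for illustration only.
    tickets = pd.read_csv("tickets.csv")

    # Tickets transferred more than once suggest the knowledge base failed
    # to route the caller to the right group on the first try.
    misrouted = tickets[tickets["transfer_count"] > 1]

    # Rank assets by how often their calls bounce between support groups;
    # the worst offenders are candidates for knowledge base cleanup.
    transfers_by_asset = (
        misrouted.groupby("asset")["transfer_count"]
        .agg(["count", "mean"])
        .sort_values("count", ascending=False)
    )
    print(transfers_by_asset.head(10))

A report like this, run monthly, turns the vague sense that “calls keep bouncing” into a ranked list that a Knowledge Manager can act on.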

Beyond the reactive route of chasing metrics, there are structures of leadership and policy that IT leaders can put in place. As alluded to earlier, formal lines of accountability are needed. That means starting at the top with a governance model supported by the appropriate people: for instance, an IT Service Manager, an Engineering Manager, a Business Administration Manager, a Human Resources Manager, a Financial Manager, and other appropriate stakeholders would fit well into this type of governance. Let’s examine this for a minute.

The biggest argument at this point is: why drag in the managers of Engineering, Finance, and HR? Isn’t this an IT-driven initiative? My argument is this: they manage the consumers of the IT services provided, so they are the ones who will have direct feedback from those consumers. Why guess at the issues, or wait for them to bubble up and escalate to the point where irreparable damage is done to IT’s relationship with a particular organizational function?

At the top of the organization, this governance group can set out accountabilities for knowledge base changes, and its members should sit high enough in the organization to be able to pinpoint large knowledge-base-impacting changes (such as new service software roll-outs). Once accountabilities have been assigned, working groups or advisory groups can be formed or leveraged. These workers and advisors, however, cannot be idle observers: they should produce analysis from their areas of expertise, complaints from IT support customers and clients, and other objective feedback that can be used to build acceptable measurements of the state of the knowledge base as a whole.

Outside consulting can also help. Having an outsider analyze the knowledge available to a particular support area, or focus 40 to 80 direct hours on clean-up and renewal, can be extremely useful and can improve productivity and customer experience immediately. Support teams rarely have 40 continuous hours to devote to this type of work, and the (literal and informational) exhaustion of support routines can cloud the entries that get placed into a knowledge base. Furthermore, there may be opportunities to flesh out the support language used across applications and functional areas; in other words, mediation may be necessary between IT and different functional areas, either to improve relations or to establish a baseline for support.

In conclusion, a number of measures must be researched and considered proactively as those in charge of application support processes look to sustain the organizational equity they have built. Enhanced governance around knowledge base infrastructure carries some cost, but the flip side of the equation is frontline knowledge workers hacking and grinding at end-user issues with broken, inaccurate, or out-of-date information, while the end users, typically the people who make or save the organization its revenue, gnash their teeth and seethe over their inability to do their jobs, or even to get a clear answer as to why their application is not working and when it might work again. The latter should continue to be the major driver behind pushing out efficient, measured, and enforceable governance of knowledge base systems in all organizations, but especially large ones.

18 Aug

SharePoint and Team Management

Placeholder – pending editor’s review

8 Aug

PI Systems and their Role in Oil and Gas

Placeholder – pending editor’s review

5 Aug

PI Information Article

Thanks to Mesalands College in New Mexico for publishing this PI summary article.  This is an excellent overview of the PI System and the tools used to access the data it collects.  The diagrams included in the poached article below are relevant, since they describe the tools I use and troubleshoot on a day-to-day basis.  Immediately below is a great, comprehensive high-level summary of the (OSIsoft) PI System.

OSIsoft PI System

  • PI stands for Process Information, which is an intentionally nebulous term designed to describe any and all information generated by any type of process system comprised of electrical or mechanical equipment. 
  • The amount of PI generated by an industrial facility can be enormous, and special software and systems are required to compile it into a useful form.  Several software companies have been successful in marketing PI management systems, including the notable example of OSIsoft, which found its beginnings in the data intensive and data driven petrochemical industry.   
  • Using OSIsoft PI systems, scientific research on the behavior of any type of machinery and equipment can be efficiently conducted, as can energy-efficiency studies and other critical industrial cost-saving measures such as predictive maintenance.
  • In contrast to petrochemical and other large scale industrial facilities, early commercial wind turbines were relatively simple machines, often with vague modes of operation limited to conditions as simplistic as “Run” and “Faulted.”   Modern commercial wind turbines, however, may possess system complexities rivaling those of small factories, with literally thousands of sensors and operational conditions. 
  • With this added complexity, it becomes possible to form a sophisticated picture of the detailed inner workings of the machine during operation and when faulted, provided that the data is collected and made accessible. 
  • SCADA systems (Supervisory Control And Data Acquisition) have existed in the wind industry for over two decades in one form or another, often employing proprietary technologies and custom built equipment and software.  These deployments often interface via multiple communication standards, delivering various formats of data which must then be translated into useful forms by specialized software or human beings. 
  • The shortcomings of these pre-existing data collection and control systems are becoming more and more evident as the maturing wind industry discovers the enormous value of comprehensive data collection in diagnosing existing faults and preventing costly new ones.  As a result, PI management software is gaining acceptance in the wind industry and is providing attractive features to engineering and management figures alike.
  • In simplistic terms, the PI system is usually PC based, and interfaces with industrial equipment that is capable of gathering sensor data and status messages from automated “smart” machines. 
  • The inherent customizability of most PI systems is one of their biggest assets, as data points (known as “tags”) can be generated by customers to represent just about any kind of available process information.  Tags may be written that account for individual temperature, pressure, and vibration sensors, or entire arrays of sensors.  Tag databases are expandable and upgradable, with data points existing in numbers of hundreds, thousands, or even hundreds of thousands, capable of monitoring literally millions of inputs simultaneously and in real-time.
  • Aside from tags that draw from raw sensor data, tags can also be created that utilize status conditions calculated internally by the industrial field equipment being interfaced with.  In a commercial wind turbine, for example, an excessively high electrical current reading may not be of concern unless it occurs in concert with another high reading elsewhere in the machine.  Such a combination of events would trigger a special fault condition in the turbine, and a customized PI tag could be written that accounted for only that event, as opposed to the data provided by the individual current sensors.
  • Once tags are written, the resultant collected data can be archived for as long as the customer wishes to maintain it.  This data is archived in a unified data point standard that enables corporations to share process data with any position within the company (provided they have the proper clearance levels). 
  • The same data accessed by field technicians can be drawn upon by executive corporate management, and every position in between.  One way this data becomes so universally accessible is through the ability of most PI systems to export process information to popular and highly standardized interactive data-analysis software like OSIsoft PI ProcessBook and Microsoft Excel, and to speak standardized programming languages like Structured Query Language (SQL) and Visual Basic; a sketch of pulling tag data programmatically follows this list.  These abilities alleviate the need to purchase expensive proprietary software suites and retrain employees on how to use them.  Additionally, in this age of global computer networking, any PI system can naturally be accessed from any point on the planet via the internet, again, provided that proper clearance levels are granted.
  • PI Software suites like those produced by OSIsoft make it possible to build incredibly comprehensive and highly accessible data management systems that are tailor-made to the specialized demands of the wind industry.   Such process information management is already having positive impacts in commercial wind, as engineers utilize the wealth of new data to improve the reliability of failure-prone and expensive components like gearboxes, and the energy gathering abilities of wind turbine blades.
  • The NAWRTC PI System is capable of tracking 26,000 tags.  The GE SCADA system monitors 590 elements: some are numeric values, some are on/off flags, and some are state codes (the returned value correlates to a state or condition such as shutdown or maintenance mode).
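To make the tag concept above concrete, here is a minimal sketch of reading a tag’s recent archived values over OSIsoft’s PI Web API (the vendor’s REST interface). The server URL, tag path, and open access are assumptions invented for the example; real deployments add site-specific authentication.

    import requests

    # Minimal sketch of reading PI tag history via the PI Web API (REST).
    # BASE_URL and TAG_PATH are placeholders; authentication (Kerberos,
    # basic auth, etc.) varies by site and is omitted here.
    BASE_URL = "https://pi-server.example.com/piwebapi"
    TAG_PATH = r"\\PIDataArchive\Turbine01.GearboxOilTemp"

    # 1) Resolve the tag (PI point) path to its WebId.
    point = requests.get(f"{BASE_URL}/points", params={"path": TAG_PATH}).json()
    web_id = point["WebId"]

    # 2) Pull the last day of recorded (archived) values for that tag.
    recorded = requests.get(
        f"{BASE_URL}/streams/{web_id}/recorded",
        params={"startTime": "*-1d", "endTime": "*"},
    ).json()

    for item in recorded["Items"]:
        print(item["Timestamp"], item["Value"])

The same archive feeding this query is what ProcessBook displays and Excel imports draw on: one data store, many clients.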

Figures (images omitted):

1. Typical network connection for OSIsoft PI System.
2. PI ProcessBook.
3. PI System Process Explorer showing tags.
4. Import into Excel.
5. SQL data accessed by Visual Basic.