Technology races ahead with new methods and capabilities, but that doesn't mean that CIOs should forget some of the management strategies that have always worked.
A host of new and revamped practices have IT departments reinventing themselves: collaborative software development; rapid application prototyping and deployment to production; new project meeting methodologies; the growth of BYOD, which encourages democratic device usage in the field; and more. Nevertheless, the fundamental requirements
for quality systems that work right the first time are not going to go
away. The rudiments of IT asset protection, disaster recovery, and
business continuation also remain. Consequently, many tried and proven
“old school” IT practices still make venerable companion strategies for
emerging IT trends. Here are ten “old school” technology strategies that
CIOs should not forget:
1. Project management by walking around
IT
is a project-driven discipline. However, no matter how collaborative
and informational your project management software is, it can never
replace just walking around to see how staff is feeling about the
projects they are working on. Body language and face-to-face communication will tell you much more about the health of a project
than any software can. The technique worked thirty years ago, and it
still works today.
2. Data retention and access meetings
A
myriad of rules can be built into automated systems that patrol for
security clearances to applications, or that automate the data backup
and purge operations. But none of this means anything in the context of
enterprise data governance if business units aren’t onboard with it.
Data retention meetings can be long and arduous, because everyone these days is mindful of proliferating regulations. Understandably, users are
hesitant to get rid of data. They are also cautious about who gets
access to sensitive data within their own work groups. Discussions and
decisions about data retention and access are still best facilitated in
old-fashioned, face-to-face meetings because of the complexity of issues
that can arise. A system portal with fill-in parameters for data
retention and security can never do the process justice.
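For illustration only, here is a minimal Python sketch of the kind of automated retention rule such a portal might enforce; the data categories, retention periods, and function names are all hypothetical, and the judgment calls behind them are exactly what the face-to-face meetings exist to settle:

```python
from datetime import datetime, timedelta

# Hypothetical retention rules, of the kind a governance meeting
# might agree on (all categories and periods are illustrative).
RETENTION_RULES = {
    "invoices":        timedelta(days=7 * 365),  # keep seven years
    "support_tickets": timedelta(days=2 * 365),  # keep two years
    "web_logs":        timedelta(days=90),       # keep ninety days
}

def records_due_for_purge(records, now=None):
    """Yield (category, created_at) records older than their agreed
    retention period; categories with no rule are kept, pending a
    governance decision."""
    now = now or datetime.utcnow()
    for category, created_at in records:
        rule = RETENTION_RULES.get(category)
        if rule is not None and now - created_at > rule:
            yield category, created_at

sample = [("web_logs", datetime(2013, 1, 1)),
          ("invoices", datetime(2013, 6, 1))]
for record in records_due_for_purge(sample):
    print("purge candidate:", record)
```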
3. Tape and slow, but cheap, hard disks for backup and archiving
We’ve
been hearing about the impending demise of tape backups for decades,
but tape is still here and companies are continuing to invest in it.
Slow, but cheap, hard disks also remain staples of data backup and archiving, while faster flash storage (and in-memory storage) serves rapidly accessed data. It is doubtful that older disk and tape storage will be replaced anytime soon in the province of backup and archiving, because of their dependability, economy, and the number of
backup systems and procedures that enterprises have built around them
through the years.
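As a rough sketch of that division of labor, the Python fragment below routes a dataset to flash, cheap disk, or tape archive based on how recently it was accessed; the thresholds and tier names are invented for the example, since real values would come from an enterprise’s own backup procedures:

```python
from datetime import datetime, timedelta

# Illustrative tiering thresholds; real ones would come from the
# enterprise's own backup and archiving procedures.
FLASH_WINDOW = timedelta(days=7)    # hot data stays on fast storage
DISK_WINDOW = timedelta(days=365)   # warm data moves to cheap disk

def storage_tier(last_accessed, now=None):
    """Pick a storage tier for a dataset based on access recency."""
    now = now or datetime.utcnow()
    age = now - last_accessed
    if age <= FLASH_WINDOW:
        return "flash"
    if age <= DISK_WINDOW:
        return "cheap_disk"
    return "tape_archive"

# A dataset untouched for 400 days lands on the tape archive tier.
print(storage_tier(datetime.utcnow() - timedelta(days=400)))
```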
4. Life cycle “spend downs” of old servers
Workstations
of power users can be redeployed as they age to average or light users,
and aging “workhorse” servers in IT production can be redeployed for
testing applications or even for use as network proxy servers. The
object is getting every ounce of capability out of IT assets. In the
“old days,” this meant “spending down” resources even after their
depreciation cycles were met. The practice still works.
5. Respect for the traditional software development life cycle
It’s
not uncommon for some companies today to design applications on the
fly, briefly test-drive them, and drop them into production. In these
cases, users and IT know that apps won’t work perfectly—but they concede
that it’s better to be fast and agile than to drag out software
development and deployment. Especially in Web app environments, this can
work to competitive advantage. However, for mission-critical
applications that must work right every time and also comply with
industry regulations and security standards, software has to be of very
high quality. Accordingly, it is important to cycle this software
through requirements definition, application design and development,
quality assurance, and deployment to production. These steps are codified
in traditional software development methodologies that have been in
place for over thirty years. With so much at stake, the checkpoints for
quality that are inherent in these traditional methodologies shouldn’t
be overlooked.
6. Application stress testing
The U.S. HealthCare.gov website is the latest example of
a software application that didn’t work because it was never adequately
stress-tested. When you are under the gun, it’s easy to skip important
steps in the quality assurance process, such as ensuring that your
application can handle the maximum number of users or transactions that
could ever arrive at once. Today, as in the past, there are proven, automated test tools that simulate maximum application stress loads. This QA checkpoint should never be skipped.
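As one hedged illustration of what such a checkout can look like, the sketch below uses only Python’s standard library to simulate concurrent users against a hypothetical endpoint and report a success rate and 95th-percentile latency; the URL and load figures are assumptions, and a real QA team would likely reach for a dedicated load-testing tool:

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/health"  # hypothetical endpoint under test
CONCURRENT_USERS = 100                # assumed peak concurrency
REQUESTS_PER_USER = 10

def one_user(_):
    """Issue a burst of requests, recording successes and latencies."""
    ok, latencies = 0, []
    for _ in range(REQUESTS_PER_USER):
        start = time.time()
        try:
            with urllib.request.urlopen(URL, timeout=5) as resp:
                if resp.status == 200:
                    ok += 1
        except Exception:
            pass  # timeouts and errors count as failures
        latencies.append(time.time() - start)
    return ok, latencies

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    results = list(pool.map(one_user, range(CONCURRENT_USERS)))

total = CONCURRENT_USERS * REQUESTS_PER_USER
successes = sum(r[0] for r in results)
latencies = sorted(l for r in results for l in r[1])
print(f"success rate: {successes / total:.1%}")
print(f"p95 latency:  {latencies[int(len(latencies) * 0.95)]:.3f}s")
```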
7. Change management and version control
Documenting changes to systems and applications and
ensuring that software on distributed workstations and mobile devices
is synchronized to the latest release levels continue to be weak spots in
IT—despite the fact that change management and version tracking software
has been around for years. The problem is not lack of tools to do the
job, but loose enforcement of IT practices and policies that ensure that
change management and version control are always done. As part of the
process, application developers should be given guidelines on how they
should document the software they develop—and documentation review
should be part of QA checkout. Often, software documentation is skipped
in the effort to get software out quickly. This places a great burden on
the software maintenance staff, which now must deal with software that
is almost “black box” in nature at the same time that they are trying to
troubleshoot a production problem.
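To make the version-control half of this concrete, here is a minimal Python sketch that compares software versions reported from the field against a release manifest and flags drift; the package names, version numbers, and inventory format are all hypothetical:

```python
# Hypothetical release manifest: the version each package should be at.
EXPECTED = {"crm-client": "4.2.1", "vpn-agent": "7.0.3"}

# Inventory as it might be reported by endpoint-management tooling.
DEPLOYED = {
    "workstation-017": {"crm-client": "4.2.1", "vpn-agent": "6.9.0"},
    "tablet-042":      {"crm-client": "4.1.0"},
}

def version_drift(expected, deployed):
    """Yield devices whose installed software lags the manifest."""
    for device, packages in deployed.items():
        for name, wanted in expected.items():
            installed = packages.get(name)
            if installed != wanted:
                yield device, name, installed, wanted

for device, package, have, want in version_drift(EXPECTED, DEPLOYED):
    print(f"{device}: {package} is {have or 'missing'}, expected {want}")
```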
8. IT asset and mobile device tracking
There
is software on the market that tracks IT assets and also “moving”
assets such as mobile devices in the field. What’s more challenging for
IT is crafting policies for the use of company-owned devices, such as what (if anything) should be stored on these devices, whether (and by whom) devices may be upgraded, and who may use them. Ten years ago,
it was relatively straightforward to enact these policies—but with BYOD
(bring your own device) and changing attitudes about personal use of
devices, IT needs to revisit (or in some cases, enact) policies that
will meet corporate security and regulatory standards. Older policy
statements can be helpful in new policy formulation.
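As a sketch of how such policies might be enforced once they are written, the Python fragment below checks a device registry against two assumed rules; the registry fields and the rules themselves are invented for the example, and real records would come from an asset-tracking or mobile-device-management system:

```python
# Illustrative device registry; real records would come from an
# asset-tracking or mobile-device-management system.
DEVICES = [
    {"id": "D-101", "owner": "corporate", "encrypted": True,  "patched": True},
    {"id": "D-202", "owner": "byod",      "encrypted": False, "patched": True},
]

def violations(device):
    """Yield policy violations for one device (rules are assumptions)."""
    if not device["encrypted"]:
        yield "storage must be encrypted"
    if not device["patched"]:
        yield "OS must be at the current patch level"

for device in DEVICES:
    for problem in violations(device):
        print(f"{device['id']} ({device['owner']}): {problem}")
```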
9. In-person system and application walk-throughs
Despite
breakthroughs with collaboration software, nothing is better for a
technical design review of a complex system or application than getting every expert in the room for a detailed walk-through of the system. When you have the DBA, the network specialist, the application developer, the business analyst, and the system specialist all gathered in a “live,” interactive setting, you have the best chance of flushing out hidden design problems that wouldn’t surface in individual or virtual reviews.
10. Manual procedures
Ten
years ago, banks still had “old hands” on board who remembered how to
use a paper ledger to record bank transactions when the core banking
system went down. This need hasn’t changed. Although we now have many
automated failover systems and methodologies, organizations are also
more dependent on IT than they were a decade ago, which makes a major Internet or technology outage all the more disruptive. This is why individual business units, including IT, should be encouraged to
maintain a set of manual procedures for operation. Hopefully, these
guidelines will just gather dust in desk drawers—but if you ever have a
total outage, you will appreciate how valuable it is to have “old
fashioned” methods of doing business—and employees who are trained to
operate “by hand” if they have to.