
Would you prefer to compare apples with apples (well, I do not mean 3G with 4G) or apples with carrots? I assume that, looking for healthy groceries in a supermarket, you would rather compare the same kind of fruit or vegetable than mix them. Interestingly, the same is true for enterprise architecture management tools! Looking for a way to automate their EAM capability, companies kick off a vendor selection process. Yet, thanks to the confusion about what EA is and its role within the enterprise, both clients and vendors have a difficult time. Supply should match demand; however, a client should first acquire a proper understanding of their demands and formally document them. On the other side, a vendor struggling to understand the client's demands often reverts to its comfort zone and simply pushes a standard, “one-size-fits-all” value proposition. The result of such miscommunication is pretty obvious: a mismatch between demand and supply, turning into weak, if any, adoption of the acquired EAM tool.

Over the past several years in the EAM business I have come across, helped to deliver, or simply consulted my great colleagues worldwide on hundreds of RFIs and RFPs from companies looking to purchase an EA tool. They ranged from extremely simplistic but very focused, to extremely short but very broad, like “Does your tool support demand management?”. And certainly there were tons of copies of the 500+ questions from the EA Tool Selection Guide by IFEAD: these companies simply copied the default sheet without tailoring it to the purpose and maturity of their architecture management practice. :-( Another extreme was a bunch of 150+ predefined questions in the form of “Which application is in-house/bought?” or “How does my information model look?”, with the answer to each of them split into four categories – storage, drill-down analysis, reporting and exporting. And do not forget the questionnaires from Gartner and Forrester leveraged by their clients, each with its own flavor and sometimes changing drastically from year to year. Last but not least, we are involved in the ongoing development of the TOGAF Tool Certification criteria…

Furthermore, vendors extend their tools and often end up adding functionalities specific to instruments from adjacent parts of a process or use case. Say a tool initially created for the implementation phase of the Software Development Lifecycle (SDLC) is extended to claim to address user needs in the analysis and design phases of the process. So I quite often faced inquiries to compare the ARIS Platform with tools developed and used for quite different purposes. That increases the complexity of selection, right?

So how could the selection process (including the comparison of alternatives) be improved? How would I compare EA tools if I were a client? And finally, what makes up such a tool? Would a capability model be sufficient to answer that question?

That is why some time ago I started from scratch and came up with the following taxonomy of 1st- and 2nd-level capabilities:

Let me very briefly explain this taxonomy.

  1. Architecture Administration covers the capabilities to administer architecture content:

    1. Storage administration – the capability to organize architecture content in the repository and to manage its lifecycle.
    2. Metamodel administration – the capability to tailor architecture metamodel(s) and exchange metamodel definitions.
    3. Access & identity management – the capability to manage users, user groups, user directories and their access to architecture content in the repository.
    4. Multi-language management – the capability to manage the multiple languages used to store and view architecture content.
    5. Automations management – the capability to create, execute and organize scripts automating functionalities of the EAMS.
  2. Architecture Populating and Modeling covers the capabilities to enter information into architecture repository:
    1. Manual populating and modeling – the capability to enter information manually through catalogues, matrices, diagrams and forms.
    2. Automatic populating of structured content – the capability to either migrate or federate information from structured sources such as XML files or a CMDB.
    3. Linkage with unstructured content – the capability to link architecture content in the repository with sources of unstructured content, like web pages or documents in Document Management Systems (DMS).
    4. Structuring by frameworks and standards – the capability to enter information and structure it according to some framework, either open (e.g. TOGAF) or proprietary (e.g. IAF) or combination of both.
  3. Architecture Analysis covers the capabilities to actively understand and analyze architecture content:
    1. Navigation & search – the capability to browse, search and query content in the repository.
    2. Structural analysis – the capability to analyze the structure of architecture content in order to identify gaps, redundancies and impacts of architecture artefacts and their interrelations.
    3. Quantitative analysis – the capability to compare or aggregate quantitative properties of architecture content like cost or utilization.
    4. Qualitative analysis – the capability to compare or aggregate qualitative properties of architecture content like availability or criticality.
    5. Time-based analysis – the capability to analyze architecture content as a function of time.
  4. Architecture Communication covers the capabilities to spread architecture content to the masses:
    1. Publishing – the capability to share architecture content as-is by publishing it on a company portal or simply a shared folder.
    2. Reporting – the capability to leverage the architecture content for a certain report, constrained by its format and content and used e.g. by an architecture board to formally evaluate alternatives.
    3. Dashboarding – the capability to aggregate architecture content into simplified, live representations suitable for different user groups to monitor e.g. application portfolio health.
    4. Exporting – the capability to persist architecture content in a structured format, e.g. XML.
    5. Visualization – the capability to create views on architecture content for the sake of its better communication to certain stakeholders.
  5. Architecture Governance covers the capabilities to enforce formal governance processes over architecture content:
    1. Sign-off & release management – the capability to formally define and execute workflows to sign-off (approve) architecture content releases.
    2. Change management – the capability to formally define and execute request-for-change workflows as well as to trace changes in architecture content.
    3. Usage traceability – the capability to monitor actual usage of architecture content by the users and user groups.
    4. Quality assessment – the capability to ensure completeness, consistency and unambiguity of architecture content.
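To make the taxonomy concrete, here is a minimal sketch of how the capability areas above could drive a weighted, side-by-side comparison of vendors during selection. Only the capability names come from the list above; the MoSCoW-style weights, the `score_vendor` function and all coverage figures are hypothetical illustrations, not part of any real tool or methodology.

```python
# Illustrative sketch: the taxonomy as a plain data structure, plus a
# naive weighted scoring of vendor RFI answers against it. Weights and
# coverage values below are invented examples.

CAPABILITIES = {
    "Architecture Administration": [
        "Storage administration", "Metamodel administration",
        "Access & identity management", "Multi-language management",
        "Automations management",
    ],
    "Architecture Populating and Modeling": [
        "Manual populating and modeling",
        "Automatic populating of structured content",
        "Linkage with unstructured content",
        "Structuring by frameworks and standards",
    ],
    "Architecture Analysis": [
        "Navigation & search", "Structural analysis",
        "Quantitative analysis", "Qualitative analysis",
        "Time-based analysis",
    ],
    "Architecture Communication": [
        "Publishing", "Reporting", "Dashboarding",
        "Exporting", "Visualization",
    ],
    "Architecture Governance": [
        "Sign-off & release management", "Change management",
        "Usage traceability", "Quality assessment",
    ],
}

# One way to encode the {must | should | may} qualifier per capability.
WEIGHTS = {"must": 3, "should": 2, "may": 1}

def score_vendor(requirements, coverage):
    """Weighted coverage score in [0, 1].

    requirements: {sub-capability: "must" | "should" | "may"}
    coverage:     {sub-capability: 0.0 .. 1.0} distilled from RFI answers
    """
    total = sum(WEIGHTS[level] for level in requirements.values())
    earned = sum(WEIGHTS[level] * coverage.get(cap, 0.0)
                 for cap, level in requirements.items())
    return earned / total if total else 0.0

# Example: a client that must have metamodel tailoring and would like
# dashboards, evaluating one vendor's (invented) answers.
reqs = {"Metamodel administration": "must", "Dashboarding": "may"}
vendor_a = {"Metamodel administration": 1.0, "Dashboarding": 0.5}
print(score_vendor(reqs, vendor_a))  # (3*1.0 + 1*0.5) / 4 = 0.875
```

The point of the sketch is that the taxonomy, not a vendor's feature sheet, fixes the rows of the comparison: every tool is scored against the same capability list, which is exactly the "apples with apples" comparison argued for above.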

That makes up my list of the capabilities every EAMS {must | should | may} provide in order to support the daily work of an EA practice and all relevant stakeholders and users. Did I miss something? Would you expect some other capabilities? Maybe you can propose another way to structure them? Please comment below or just drop me an email!

Tags: Enterprise Architecture EA