
Don’t let your Salesforce.com Implementation Program lose Momentum

Salesforce Implementations can get seriously bogged down after a year or two of haphazard evolution

Is progress on your Salesforce implementation program becoming increasingly difficult?

Symptoms include:

  • Projects delivered late and over budget
  • Declining user adoption
  • Declining data quality
  • Minor changes taking too long to move from idea capture to delivery
  • Stakeholders preoccupied with stabilising the mess rather than taking strategic next steps
  • Difficulty retaining Salesforce staff

Whilst these symptoms are all too common with larger Salesforce implementations made complex by the number of business units and system integrations, the good news is there are steps you can take to stay (or get back) on track.

Make these practical decisions at the start of a Salesforce implementation program to ensure you don’t get “stuck in the mud” (or perhaps “stuck in the cloud” is the modern phrase!):

  1. Establish a “Centre of Excellence” to govern change
    – centralise decision making, release management and design standards;
  2. Establish strong product management involving business and IT to assess and prioritise requests for change
    – ensuring high priority/value requests are delivered first;
  3. Stay ahead of the curve with strategy and architecture
    – communicate a clear (regularly refreshed) vision explaining where you are heading and how you will get there;
  4. Use a consistent delivery team
    – yield better results with an agile team working through a regularly re-prioritised backlog to avoid loss of staff continuity across a series of larger stop/start projects;
  5. Deliver releases regularly (continuous delivery)
    – be responsive to change requests (at least for minor enhancements) to keep users engaged as they receive increasing value as the platform is advanced;
  6. Keep documentation current
    – capture the reasons why decisions were made and what outcomes were achieved to inform the future;
  7. Develop a library of test classes/methods which simulate and test critical business functions
    – ensure nothing breaks as changes are deployed by running these test methods automatically (see the sketch after this list);
  8. Continuously focus on data quality at the point of entry
    – be clear for all data about why it is needed, how it will be used, and what defines “good data”;
  9. Make an ongoing investment to resolve legacy implementations which are causing problems
    – avoid an increasing pile of “technical debt” which will eventually inhibit progress;
  10. Learn as you go and invest in training
    – improve delivery over time by conducting post-implementation reviews;
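
As a flavour of point 7, here is a minimal Apex sketch of one entry in such a test library.  The Booking__c object, its fields and the expected status value are hypothetical; the point is that each test simulates a critical business function and asserts the expected outcome, so automated deployments fail fast if behaviour regresses.

```apex
@isTest
private class CriticalBusinessFunctionTests {

    @isTest
    static void newBookingsAreRoutedToTheFulfilmentQueue() {
        // Booking__c and Status__c are hypothetical; the automation under test
        // is expected to set the status when the record is inserted.
        Booking__c booking = new Booking__c(Name = 'Critical path test');
        insert booking;

        booking = [SELECT Status__c FROM Booking__c WHERE Id = :booking.Id];
        System.assertEquals('Ready for Fulfilment', booking.Status__c,
            'New bookings should be routed to the fulfilment queue');
    }
}
```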

If you need help with Salesforce please get in touch with Artisan Consulting.  Artisan provides a cost-effective Salesforce Program Health Check which documents your current state and where you want to get to, then provides practical recommendations for your next steps.

About the Author
Richard Clarke is a Program Director and Technical Architect within Artisan Consulting's Salesforce Delivery Team.  Richard has led Salesforce delivery teams in Australia, New Zealand and the USA, and applies over 20 years of enterprise software experience when delivering business value with Salesforce.com.


Salesforce (Data) Relationships Matter

The ability to maintain data quality is dependent on establishing the right relationships between objects and configuring appropriate controls over data entry.  Too often this only gets the focus it deserves a year or two after Salesforce is implemented when data quality is poor and causing problems with reporting, integration or workflows.

Salesforce Data Relationships

There are two types of relationships available to connect objects together (master-detail and lookup), each of which has further options.

Choose the relationship type which best represents the real-world relationship between the business concepts the objects mirror, then apply the tightest options possible to control data quality.

Don’t add relationships to objects until you fully understand the choices available!

Master-detail (parent-child) relationships

This is always a required relationship as detail records don’t have an owning user and access control is managed at the master level.  When a master record is deleted, all related detail records are deleted as well.  There can be no more than two master-detail relationships on an object, and only one if the object is the master of another relationship.  Standard objects cannot be detail records and Leads/Users cannot be master records.

There is an option to allow users to change the parent (master record) of a detail record.

A master record can have no more than 10,000 detail records.

Lookup (cousin) relationships

This relationship is not automatically required and has no effect on record access.  If the relationship is set to be required, a record can’t be deleted if other records are related to it.

If the relationship is not required, you can choose whether to prevent a record being deleted if other records are related to it, or to allow deletion by automatically clearing any lookup relationships.

Setting a lookup to be required has a significant impact when creating a partial copy sandbox which contains objects with more than 10,000 records.  The partial copy sampling process will retain relationships which are required, hence the related record will be copied as well.  If the lookup is not required, values can be lost from lookup relationships during the sampling process.
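
To make the behavioural difference concrete, here is a minimal anonymous Apex sketch.  Invoice__c and Invoice_Line__c are hypothetical objects: Invoice_Line__c has a master-detail relationship to Invoice__c, and Invoice__c has a required lookup (Account__c) to Account configured as described above.

```apex
Account acc = new Account(Name = 'Relationship Demo');
insert acc;

Invoice__c inv = new Invoice__c(Account__c = acc.Id);
insert inv;
insert new Invoice_Line__c(Invoice__c = inv.Id);

// Master-detail: deleting the master cascades the delete to its detail records.
delete inv;
System.assertEquals(0, [SELECT COUNT() FROM Invoice_Line__c]);

// Required lookup: the Account cannot be deleted while an Invoice__c still references it.
insert new Invoice__c(Account__c = acc.Id);
try {
    delete acc;
    System.assert(false, 'Expected the required lookup to block the delete');
} catch (DmlException e) {
    System.debug('Delete blocked: ' + e.getMessage());
}
```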

Filtering relationships

Applying an active filter to a relationship means that when the user searches for a related record, only those which match the criteria will be presented.  An active filter which is required cannot be bypassed.  An active filter which is optional can be bypassed.

An object can have no more than 5 active filtered relationships.

Validating relationships

Applying validation rules to a relationship can be used to check (when the record is saved) that a related record meets specific criteria.  This can be helpful if you hit the limit of 5 active filters as the limits on validation rules are higher (20 per object in Professional Edition and below, and 100 in Enterprise Edition and above).  Validation rules are also more powerful and more flexible than lookup filter criteria.

Active filters provide a better user experience than validation rules as they limit the records able to be selected, rather than allowing records to be selected then blocking the save.

Hints to guide selecting the right relationship type

  • If a record cannot exist by itself and access is controlled by the parent, use master-detail.
  • If a record cannot exist by itself but has its own access controls, use a required lookup.
  • Use active filters if it does not make sense to allow connection to any record.
  • Use validation rules to limit which records can be related if there are already 5 active filters.

References

Limits – http://resources.docs.salesforce.com/198/17/en-us/sfdc/pdf/salesforce_app_limits_cheatsheet.pdf

Cheat sheet – https://resources.docs.salesforce.com/200/latest/en-us/sfdc/pdf/salesforce_filtered_lookups_cheatsheet.pdf

Lookup Filters – https://help.salesforce.com/HTViewHelpDoc?id=fields_lookup_filters_notes.htm

 


Ten ways to avoid common pitfalls when delivering Salesforce.com projects

How to make sure your Salesforce implementation does not end up looking like this!

1. Define a Clear Roadmap

  • Not knowing where you want the journey to end guarantees you won’t end in the right place.
  • When you want to arrive is almost as important as where you want to arrive.
  • The end result is usually bigger than “just” CRM (consider all the relationships which need management).
  • Be clear about which business capabilities need to be supported or enhanced with Salesforce.
  • Don’t transfer fractures in organisational structure into Salesforce – share common customer data.
  • Stay abreast of Salesforce’s own roadmap.

2. Define Clear Program and Project Requirements

  • Define strategy (the “why”) and required business outcomes (the “what”) before considering the “how”.
  • The “what” has to be measurable as this defines success.
  • Having key stakeholders including operational managers express and approve requirements is critical.
  • Don’t leave thinking about reporting/analytics until last.
  • Consider security upfront – both who can see what data and who can perform what functions.

3. Don’t do too Much at Once

  • Multiple projects at once are hard to coordinate especially when multiple delivery partners are utilised.
  • Doing two different things at once is three times harder than doing one thing at a time. 
  • Doing two different things fast at the same time is four times harder.
  • Running multiple overlapped software development projects will increase the need for tight governance (processes and tools) to manage requests for change, development, testing and deployment.

4. Choose the Right Delivery Model

  • Waterfall (all requirements up front), Agile (requirements clarified one short iteration at a time) or Fragile (requirements clarified through repeated experimentation).
  • Agile does not mean being continuously vague and discovering requirements by endlessly building the wrong thing then fixing it.
  • If you are not sure what you want or how to build it explore options using discardable prototypes.
  • Use an internal self-managed agile team if your organisation has adequate software delivery maturity and available resources.  Otherwise use an internal partner-managed agile team in preference to using an external remote partner delivering under a waterfall driven statement of work.
  • Consider what the team needs to look like when you are finished and the delivery partners leave.  Work towards that outcome throughout the project so those responsible for operational management have experience with the implementation details.

5. Manage Data Architecture and Data Quality

  • Data quality will degrade without proactive steps to maintain it.  Implement the platform’s tools to minimise duplicate customer data.  Use validation rules to enforce a base level of data quality.  Calculate a data quality score for key objects and use dashboards to drive behaviour.
  • Don’t add data fields to objects unless there is a strategy to populate them.
  • Make sure relationships between objects are added by someone capable of selecting optimally between master-detail relationships, required lookups and filtered lookups.
  • Consider reporting and query/report performance early.  Index text/picklist fields which are important for reporting by flagging them as an external ID.  Ask Salesforce to add custom indexes to custom date fields if they are important data selection filters (see the sketch after this list).
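
As a small illustration of the indexing point above, the anonymous Apex sketch below filters on Legacy_Code__c, a hypothetical custom field on Account flagged as an External ID; because External ID fields are automatically indexed, the filter stays selective as data volumes grow.

```apex
// Legacy_Code__c is a hypothetical External ID field, so Salesforce maintains an
// index on it and this filter remains selective even with large data volumes.
String legacyCode = 'ERP-0001';
List<Account> matches = [
    SELECT Id, Name
    FROM Account
    WHERE Legacy_Code__c = :legacyCode
    LIMIT 200
];
System.debug(matches.size() + ' account(s) matched ' + legacyCode);
```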

6. Know what Success Looks Like

  • Develop a testing strategy and direct delivery partners as to what testing you require them to complete.
  • Testing has to address the outcomes desired as well as the outcomes to be avoided.
  • Recognise Salesforce’s 75% code coverage rule does not necessarily deliver automated tests which usefully determine if the system is working as it should.
  • Software engineers usually make poor testers so resource the testing function appropriately.

7. Maintain Current Documentation

  • Documentation needs to capture how the system works and how it integrates to other systems.
  • Projects should provide documentation which explains what changed (and why) as well as how to reverse a deployment which causes problems.
  • Keep information about system architecture current (what systems different user roles interact with and how systems integrate with each other).

8. Customise with Configuration Not Code

  • There are many ways to achieve the same thing in Salesforce; treat a coded solution as the last resort.
  • Don’t custom code a user interface until you have tried the auto-generated user interface.

9. Choose the Right Operational Mode

  • The composition of the optimal operational team varies depending on the size and complexity of the implementation.  Include a developer in the team if custom code (rather than just configuration) needs to be maintained.
  • Budget for ongoing training and incentivise team members to achieve and maintain current certifications. 
  • Use Salesforce Trailhead for self-paced learning which is both fun and easily measurable.
  • Leverage operational assistance sourced from Salesforce Premier Support and Salesforce Partners.

10. Measure and Manage Levels of Adoption

  • Define what KPIs matter and measure them.
  • Ask executives to lead by example by being visibly active in Salesforce.
  • Use Salesforce’s Chatter, Ideas, Q&A, and Portal functionality to keep the conversation alive.


Salesforce Testing Disasters

No Cheating

All too often I am engaged to help Salesforce customers who are struggling with significant quality issues in their custom code.  Here are some lessons learned from the most spectacular disasters I’ve had to remediate.

If you are responsible for managing a Salesforce implementation, read the full content of my post on LinkedIn to learn how developers can (and do) deliver deceptive indicators of code quality by:

  1. Cheating the 75% code coverage threshold
  2. Implementing test classes which test nothing (other than coverage)

See http://www.linkedin.com/pulse/salesforce-testing-disasters-richard-clarke 

What will you learn?

  1. Code coverage is not a useful measure of code quality
  2. Even 100% code coverage can be meaningless if the test code does no “testing” (see the sketch after this list)
  3. Code coverage can be cheated by adding fake classes (and yes, sadly, I've seen this in production Salesforce instances)
  4. Passing test methods are meaningless if the tests perform no actual testing
  5. User acceptance testing via the UI is not enough if only the simple positive use case is tested
  6. Developers can be lazy or plainly deceptive whilst giving the appearance of delivering good code
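
Here, for example, is a sketch of the kind of test class item 2 describes.  InvoiceService is a hypothetical class: the test executes plenty of lines (so coverage looks healthy) but asserts nothing, so it will pass no matter how broken the logic is.

```apex
// Anti-pattern: coverage without testing.
@isTest
private class InvoiceServiceCoverageOnlyTest {

    @isTest
    static void callsEverythingAssertsNothing() {
        Account a = new Account(Name = 'Coverage Co');
        insert a;
        // Lines execute and coverage accrues, but no outcome is ever checked.
        InvoiceService.generateInvoices(new List<Id>{ a.Id });
    }
}
```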

Recommendations:

  1. Document test cases (acceptance criteria) up front
  2. Ensure test cases cover positive and negative scenarios
  3. Ensure test cases cover bulk data manipulation (a sketch of such a test follows this list)
  4. Direct the developer about what automated tests must be implemented
  5. Follow test-driven development principles and create the test methods FIRST
  6. Ask for an independent expert review if you are struggling with code quality
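
By contrast, here is a sketch of a meaningful test for the same hypothetical InvoiceService (Invoice__c is also hypothetical): it creates bulk data, runs the code between Test.startTest() and Test.stopTest() so governor limits are exercised realistically, and checks both a positive outcome and a negative (error) scenario.

```apex
@isTest
private class InvoiceServiceTest {

    // Positive scenario at bulk volume: one invoice should be created per account.
    @isTest
    static void generatesOneInvoicePerAccountInBulk() {
        List<Account> accounts = new List<Account>();
        for (Integer i = 0; i < 200; i++) {
            accounts.add(new Account(Name = 'Bulk Test ' + i));
        }
        insert accounts;

        List<Id> accountIds = new List<Id>();
        for (Account a : accounts) {
            accountIds.add(a.Id);
        }

        Test.startTest();
        InvoiceService.generateInvoices(accountIds);
        Test.stopTest();

        System.assertEquals(200, [SELECT COUNT() FROM Invoice__c],
            'Expected one invoice per account');
    }

    // Negative scenario: empty input should be rejected, not silently ignored.
    @isTest
    static void rejectsEmptyInput() {
        try {
            InvoiceService.generateInvoices(new List<Id>());
            System.assert(false, 'Expected an exception for empty input');
        } catch (Exception e) {
            // In practice, catch the service's specific exception type.
            System.assert(e.getMessage() != null);
        }
    }
}
```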

Richard Clarke

Richard has been delivering complex integrated solutions on the Salesforce platform since 2007 and is the Melbourne-based principal consultant for FuseIT Australia.
https://au.linkedin.com/in/richardaclarke
 


Performance Tuning Tools for Salesforce are Comparatively Limited

The tools available to performance-tune Salesforce are comparatively limited because the platform runs on a multi-tenant Software-as-a-Service architecture.

Consider these typical responses to address poor performance of a traditionally hosted web site where you own or have control over the servers and hosting infrastructure:

  1. Deploy more web servers to distribute the front end load with load balancing

  2. Federate or partition the backend database to distribute the data query load

  3. Increase the number of CPUs or cores in the servers

  4. Increase the amount of RAM in the servers

  5. Add more hard drives, or faster hard drives, or faster IO adapters

  6. Increase the internet connectivity bandwidth

With Salesforce you have none of those options.  None.


So to avoid your Salesforce instance performing poorly, a different approach has to be taken.

I see an all-too-common pattern where complexity is added to Salesforce with gay abandon for the first year or two, followed by external expertise being needed to address serious performance issues.

Here are my suggestions to avoid ending up in a performance pitfall with all too few tools in the chest to work with:

  1. Architect with performance in mind from the beginning, recognising Salesforce is a hosted platform with deliberately applied governor limits to throttle performance;

  2. Follow a test-driven development methodology where test classes are designed to ensure performance at realistic production loads (not just to achieve 75% code coverage!);

  3. Develop and peer review custom code with performance optimisation in mind to make sure there are no obvious performance flaws like SOQL queries in loops or inefficient use of collections (see the sketch after this list);

  4. Add no more than a single trigger per object;

  5. Use custom External ID fields where appropriate as these are automatically indexed;

  6. Request Salesforce to add 1-column or 2-column custom indexes;

  7. Minimise the amount of data stored in view state when writing custom Visualforce pages (especially mobile pages);

  8. Avoid data skew where one user owns too many records or one parent record has too many children;

  9. Operate the most open data security model permissible as sharing rules add complexity and load;

  10. Integrate using the Bulk API where possible;
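
To make points 3 and 4 concrete, here is a minimal Apex sketch (OpportunityTrigger and OpportunityHandler are hypothetical names): a single trigger per object delegates to a handler class, and the handler queries once outside the loop instead of issuing one SOQL query per record.

```apex
// The single trigger per object simply delegates (this lives in its own file).
trigger OpportunityTrigger on Opportunity (before insert, before update) {
    OpportunityHandler.setAccountCountry(Trigger.new);
}

// Hypothetical handler class: bulkified, with no SOQL inside the loop.
public class OpportunityHandler {
    public static void setAccountCountry(List<Opportunity> opps) {
        // Collect the parent Ids first...
        Set<Id> accountIds = new Set<Id>();
        for (Opportunity opp : opps) {
            if (opp.AccountId != null) {
                accountIds.add(opp.AccountId);
            }
        }
        // ...then issue a single bulk query and work from a map.
        Map<Id, Account> accounts = new Map<Id, Account>(
            [SELECT Id, BillingCountry FROM Account WHERE Id IN :accountIds]
        );
        for (Opportunity opp : opps) {
            Account parent = accounts.get(opp.AccountId);
            if (parent != null) {
                opp.Description = 'Account country: ' + parent.BillingCountry;
            }
        }
    }
}
```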

In summary, performance in Salesforce implementations which involve large volumes of data and significant customisation is a challenge; there are fewer tools to utilise, so performance must be considered early in design and continuously during development.


Salesforce.com integrations can start well but end up being messy

All Salesforce implementations begin without any system integration.  Simple.  Clean.  Scalable.

Puppies – theory and practice

Increasingly, though, businesses want to integrate Salesforce with their digital assets (websites and mobile devices), internal legacy systems and external third-party systems.

Salesforce provides a number of ways to achieve integration including:

  1. Web-to-lead and web-to-case HTML form submission

  2. Email handlers processing inbound emails

  3. Outbound emails sent from Salesforce or via third party products like Marketing Cloud

  4. Inbound synchronous calls to the Salesforce REST or SOAP APIs

  5. Inbound synchronous calls to custom Salesforce web service endpoints

  6. Outbound synchronous calls to external web service endpoints

  7. Outbound asynchronous calls using future methods to external web service endpoints (see the sketch after this list)

  8. Outbound asynchronous calls using outbound messages
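
As a sketch of option 7, a method annotated @future(callout=true) lets Salesforce make the outbound call asynchronously after the triggering transaction commits.  The class, method and endpoint below are hypothetical; in practice a Named Credential would normally replace the hard-coded URL.

```apex
public class OrderNotifier {

    // Asynchronous outbound call; safe to invoke after DML (for example from a trigger).
    @future(callout=true)
    public static void notifyFulfilment(List<Id> orderIds) {
        HttpRequest req = new HttpRequest();
        req.setEndpoint('https://example.com/api/orders');  // hypothetical endpoint
        req.setMethod('POST');
        req.setHeader('Content-Type', 'application/json');
        req.setBody(JSON.serialize(orderIds));

        HttpResponse res = new Http().send(req);
        if (res.getStatusCode() != 200) {
            System.debug(LoggingLevel.ERROR,
                'Fulfilment notification failed: ' + res.getStatus());
        }
    }
}
```

It could then be queued from a trigger handler with something like OrderNotifier.notifyFulfilment(new List<Id>(Trigger.newMap.keySet())).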

When I saw this photograph I thought it was a great illustration of where I’ve seen Salesforce clients end up with their integration strategy.  When the project starts everything is orderly and under control.  Then over time development teams introduce different integration patterns which depart from the original architectural blueprint (if one existed at all).  Then the project gets deployed and integrations start firing in earnest.

Combining a myriad of different approaches to Salesforce integration with high volume transactional activity and a large active user base can create a messy outcome.  Salesforce currently lacks strong support for multi-threading control so synchronising data access and modification across a matrix of integration patterns can quickly become problematic.

My advice for projects which need extensive Salesforce integration is to adopt the best practice approaches for inbound and outbound integrations early.  These can handle bi-directional data exchange in a standardised, scalable manner.

Otherwise there will be a lot to clean up!

Credit to Alexander Ilyushin (@chronum) for the puppies photograph which originally illustrated similar outcomes with multi-threaded programming theory and practice:  https://twitter.com/chronum/status/540437976103550976/photo/1
