Archive for category Cloud Computing

Salesforce Testing Disasters

No Cheating

All too often I am engaged to help Salesforce customers who are struggling with significant quality issues in their custom code. Here are some lessons learned from the most spectacular disasters I've had to remediate.

If you are responsible for managing a Salesforce implementation, read the full content of my post on LinkedIn to learn how developers can (and do) deliver deceptive indicators of code quality by:

  1. Cheating the 75% code coverage threshold
  2. Implementing test classes which test nothing (other than coverage)

See http://www.linkedin.com/pulse/salesforce-testing-disasters-richard-clarke 

What will you learn?

  1. Code coverage is not a useful measure of code quality
  2. Even 100% code coverage can be meaningless if the test code does no “testing”
  3. Code coverage can be cheated by adding fake classes (and yes, sadly, I've seen this in production Salesforce instances)
  4. Test methods passing are meaningless if the test does no testing
  5. User acceptance testing via the UI is not enough if only the simple positive use case is tested
  6. Developers can be lazy or plain deceptive whilst giving the appearance of providing good code
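To make point 4 concrete, here is a minimal sketch of the difference between a test method that earns coverage while verifying nothing and one that actually tests behaviour. It is written in Python purely for illustration (Apex test classes follow the same pattern with `@isTest` and `System.assert`), and `apply_discount` is a hypothetical stand-in for real business logic:

```python
import unittest

def apply_discount(price, percent):
    """Stand-in for a piece of custom business logic under test."""
    return round(price * (1 - percent / 100.0), 2)

class CoverageOnlyTest(unittest.TestCase):
    def test_runs_but_checks_nothing(self):
        # Executes the code (earning "coverage") but asserts nothing,
        # so it passes even if apply_discount is completely wrong.
        apply_discount(100.0, 10)

class RealTest(unittest.TestCase):
    def test_positive_case(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_edge_case(self):
        # A genuine test also probes edge and negative scenarios.
        self.assertEqual(apply_discount(0.0, 10), 0.0)
```

Both test classes pass and both produce 100% coverage of `apply_discount`; only the second would catch a defect.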

Recommendations:

  1. Document test cases (acceptance criteria) up front
  2. Ensure test cases cover positive and negative scenarios
  3. Ensure test cases cover bulk data manipulation
  4. Direct the developer about what automated tests must be implemented
  5. Follow test driven development principles and create the test methods FIRST
  6. Ask for an independent expert review if you are struggling with code quality
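Recommendation 3 matters because Salesforce triggers receive records in batches of up to 200, so logic that works for one record can fail at production volumes. A hedged Python sketch of a bulk-aware test (the name `insert_leads` and the record shape are hypothetical stand-ins for the Apex equivalents):

```python
def insert_leads(leads):
    """Stand-in for trigger logic: must handle a whole batch in one pass,
    not one record at a time."""
    return [dict(lead, status="New") for lead in leads]

def test_bulk_insert():
    # Exercise a realistic production-sized batch, not a single record.
    batch = [{"name": f"Lead {i}"} for i in range(200)]
    result = insert_leads(batch)
    assert len(result) == 200
    assert all(lead["status"] == "New" for lead in result)

test_bulk_insert()
```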

Richard Clarke

Richard has been delivering complex integrated solutions on the Salesforce platform since 2007 and is the Melbourne based principal consultant for FuseIT Australia.
https://au.linkedin.com/in/richardaclarke
 


Performance Tuning Tools for Salesforce are Comparatively Limited

The tools available to performance-tune Salesforce are more limited because the platform runs on a multi-tenant Software-as-a-Service architecture.

Consider these typical responses to address poor performance of a traditionally hosted web site where you own or have control over the servers and hosting infrastructure:

  1. Deploy more web servers to distribute the front end load with load balancing

  2. Federate or partition the backend database to distribute the data query load

  3. Increase the number of CPUs or cores in the servers

  4. Increase the amount of RAM in the servers

  5. Add more hard drives, or faster hard drives, or faster IO adapters

  6. Increase the internet connectivity bandwidth

With Salesforce you have none of those options. None.


So to avoid your Salesforce instance performing poorly, a different approach has to be taken.

I see an all-too-common pattern where complexity is added to Salesforce with gay abandon for the first year or two, followed by external expertise being needed to address serious performance issues.

Here are my suggestions to avoid ending up in a performance pitfall with all too few tools in the chest to work with:

  1. Architect with performance in mind from the beginning, recognising Salesforce is a hosted platform with deliberately applied governor limits that throttle resource consumption;

  2. Follow a Test-Driven-Design methodology where test classes are designed to ensure performance at realistic production loads (not just to achieve 75% code coverage!);

  3. Develop and peer review custom code with performance optimisation in mind to make sure there are no obvious performance flaws like SOQL queries in loops or inefficient use of collections;

  4. Add no more than a single trigger per object entity;

  5. Use custom External ID fields where appropriate as these are automatically indexed;

  6. Request Salesforce to add 1-column or 2-column custom indexes;

  7. Minimise the amount of data stored in view state when writing custom Visualforce pages (especially mobile pages);

  8. Avoid data skew where one user owns too many records or one parent record has too many children;

  9. Operate the most open data security model permissible as sharing rules add complexity and load;

  10. Integrate using the Bulk API where possible;

In summary, performance in Salesforce implementations which involve large volumes of data and significant customisation is a challenge. There are fewer tools to utilise, so performance must be considered early in design and continuously during development.
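Point 3's "SOQL queries in loops" flaw generalises beyond Salesforce: issuing one query per record multiplies load, and in Apex it also burns through the per-transaction governor limit of 100 SOQL queries. A Python sketch of the anti-pattern and its bulk-query fix (the `query_one`/`query_many` callables are hypothetical stand-ins for SOQL statements):

```python
# Anti-pattern: one lookup per record. In Apex this is a SOQL query
# inside a loop, which hits the 100-query governor limit at scale.
def owner_names_slow(records, query_one):
    return {rec["id"]: query_one(rec["owner_id"]) for rec in records}

# Fix: collect the keys first, then issue a single bulk query by ID.
def owner_names_fast(records, query_many):
    owner_ids = {rec["owner_id"] for rec in records}
    owners = query_many(owner_ids)  # one query regardless of record count
    return {rec["id"]: owners[rec["owner_id"]] for rec in records}

# Demonstration with an in-memory "database" that counts queries issued.
db = {"u1": "Alice", "u2": "Bob"}
slow_queries, fast_queries = [], []

def query_one(owner_id):
    slow_queries.append(owner_id)
    return db[owner_id]

def query_many(owner_ids):
    fast_queries.append(sorted(owner_ids))
    return {i: db[i] for i in owner_ids}

records = [{"id": "r1", "owner_id": "u1"},
           {"id": "r2", "owner_id": "u2"},
           {"id": "r3", "owner_id": "u1"}]

slow = owner_names_slow(records, query_one)
fast = owner_names_fast(records, query_many)

assert slow == fast == {"r1": "Alice", "r2": "Bob", "r3": "Alice"}
assert len(slow_queries) == 3 and len(fast_queries) == 1
```

The results are identical, but the naive version issues three queries where the bulk version issues one.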



Salesforce.com integrations can start well but end up being messy

All Salesforce implementations begin without any system integration.  Simple.  Clean.  Scalable.

Puppies: theory and practice

Increasingly, though, businesses want to integrate Salesforce with their digital assets (websites and mobile devices), internal legacy systems and external third-party systems.

Salesforce provides a number of ways to achieve integration including:

  1. Web-to-lead and web-to-case HTML form submission

  2. Email handlers processing inbound emails

  3. Outbound emails sent from Salesforce or via third party products like Marketing Cloud

  4. Inbound synchronous calls to the Salesforce REST or SOAP APIs

  5. Inbound synchronous calls to custom Salesforce web service endpoints

  6. Outbound synchronous calls to external web service endpoints

  7. Outbound asynchronous calls using future methods to external web service endpoints

  8. Outbound asynchronous calls using outbound messages

When I saw this photograph I thought it was a great illustration of where I’ve seen Salesforce clients end up with their integration strategy.  When the project starts everything is orderly and under control.  Then over time development teams introduce different integration patterns which depart from the original architectural blueprint (if one existed at all).  Then the project gets deployed and integrations start firing in earnest.

Combining a myriad of different approaches to Salesforce integration with high-volume transactional activity and a large active user base can create a messy outcome. Salesforce currently lacks strong support for multi-threading control, so synchronising data access and modification across a matrix of integration patterns can quickly become problematic.

My advice for projects which need to integrate Salesforce heavily is to adopt the best-practice approaches for inbound and outbound integrations early. These can handle bi-directional data exchange in a standardised, scalable manner.

Otherwise there will be a lot to clean up!

Credit to Alexander Ilyushin (@chronum) for the puppies photograph which originally illustrated similar outcomes with multi-threaded programming theory and practice:  https://twitter.com/chronum/status/540437976103550976/photo/1


What sets you apart when it comes to delivering sustained results is not what you know but what you are capable of learning…

The challenge in staying current with Information Technology comes from how fast the industry continually changes. Moore's Law is often quoted as "overall processing power for computers will double every two years", but I've found the same extreme evolutionary pace applies to application development as well.

In my view the pace of change in software engineering is even more significant and harder to keep up with.  Processing power has focussed on doing the same thing faster, smaller and with greater power efficiency.  Software engineering has also invented completely new ways to design, develop, deliver and operate software.

Take Salesforce.com for example which I started working with in 2007.  Since then the platform has become more capable as significant new business functionality has been added three times a year.  Acquisitions like Heroku, Pardot and ExactTarget have broadened the definition of what “Salesforce” means.  Deep expertise across the full Salesforce suite becomes harder and harder to maintain.

Maintaining a high level of capability with software platforms like Salesforce means committing to a journey of continual learning and often progressive specialisation.

What I knew at the start of my career about Burroughs B6700 mainframes and PDP-11 mini-computers is now totally irrelevant.  What remains constantly valuable is knowing where and how to research and where and when to ask for help.

In September 2015 I’m off to Dreamforce in San Francisco which will be my fourth pilgrimage to what has become the largest annual IT conference on the planet.  Of course the networking and inspirational keynote speeches will be great, but I go primarily to learn and to absorb a vision of what is coming next.

Being part of the global Salesforce community is an exciting immersion in continual learning!

 



Cloud Computing: Not Always a Silver Lining

The Happy Promise

Cloud computing comes with an attractive promise – much like the sun shining on lovely white fluffy clouds. "Life's good in the cloud," the vendor will say. No more infrastructure to purchase, host and maintain. And with a cloud application platform like Salesforce.com, no more developers either, as all it takes is "clicks not code". Sounds wonderful. And it always is at the start.


The Reality

Building complex integrated business solutions continues to be complex, especially when the landscape includes legacy monolith systems unaccustomed to "talking" to anything outside the firewall boundary.

Managing the evolution of an enterprise database is also a challenge as business needs change over time. With Salesforce it is incredibly easy to add new entities to your organisation's "database" or add new fields to entities already there. Add-on applications can easily be introduced from the Salesforce AppExchange – each of which adds to the overall system complexity.

The law of entropy (the second law of thermodynamics) applies here – an isolated system will spontaneously evolve towards maximum disorder.

The faster a cloud platform allows changes to be made, the faster the drift towards disorder occurs.

The Common Journey

Particularly with Salesforce.com the journey starts with business stakeholders becoming exasperated with the speed of their IT department. They engage Salesforce and within days their CRM system is up and running. So easy. In hindsight perhaps too easy.

Initially Salesforce is a clean well-structured system without back-office integration. Then the changes start and the clouds start changing their colour. New entities and fields are freely added to the database. Integrations are established with back office systems. Add-on applications are installed.

Entropy kicks in and the march towards disorder begins.

Eventually the integrated system becomes challenged with data synchronisation issues and fields which contradict each other (should that be an opt-in or an opt-out to stay compliant with anti-spam legislation?). Ownership of the system moves progressively from a front-office business unit over to IT, who for the most part have been kept out of the journey to date and don't understand how clouds work – other than that they look black, ugly and threatening.

The pace of evolution slows or stops, and the main point of why the cloud-based system was introduced is lost in distant history.

What can be Done?

The good news is this outcome is not pre-ordained and need not happen.

Here are some things to consider early in the journey to avoid ending up in a stormy situation:

  1. Accept that building complex integrated technology solutions remains complex even with cloud computing, and that success will require skilled IT professionals to be involved (regardless of vendor assurances that IT won't be needed and is best not engaged).
  2. Accept that database design is a specialist skill and establish good data governance from the beginning.
  3. Provide adequate training and mentoring to the group tasked with administering the platform especially if they come from a business rather than technology background.
  4. Realise the law of entropy applies and there is a need to proactively push back against the drift towards chaos. All changes need to be thought through carefully.

Storm Dispersal

If you find yourself no longer living the blue-sky dream with Salesforce.com and need help to disperse the storm clouds which have accumulated during the first few years of use, then I'd encourage you to get in contact. After a detailed current-state assessment of your Salesforce organisation and integrations, it will be possible to plot a path back to the land of the fluffy white clouds. It may take a while to unpick the chaos, but it is always possible.

Richard Clarke, Salesforce Architect and Integration Specialist
Contact me via email: richard.clarke@fuseit.com




Cloud computing integrated with on-premise computing – good blend or bad mongrel?

The mark of successful cross-breeding is ending up with a blend better than either starting point.

Get the mix wrong however and you end up with a mongrel.  And whilst mongrels can be loveable and bark as well as a pure breed, they almost never form a good foundation for future generations.

I see this challenge arise in enterprise computing when an organisation ends up with a leg in both cloud computing and traditional on-premise computing, then embarks on a transformation program to bring the two together.  

Traditionally IT departments prefer to stay pure-bred (on-premise and under control) but are forced into cloud computing when business units deploy software-as-a-service which eventually ends up requiring integration.

The starting point is two pure-bred environments which end up being "crossed" – like Salesforce.com in the cloud and SAP on-premise.

The challenge is to take the best of both to create a perfect blend, without ending up with a regrettable mongrel!


 
