Monday, January 11, 2016

With Sharing, Without Sharing, and You

Using Sharing Responsibly

The "with sharing" and "without sharing" keywords seem to be a poorly understood feature of the salesforce.com platform. Even I have made mistakes in some of the answers I've previously written on the topic. This post will document what the "with sharing" and "without sharing" keywords actually do, when you should use a particular mode, and the consequences of abusing these seemingly simple keywords.

What Is Sharing?


Sharing determines whether a user can do something with a particular record, based solely on the Share tables. When sharing is enforced in code, every DML operation is checked against the Share table entries for the affected records to see if the user is allowed to perform that operation. This means that a user who tries to update a record when they only have read access to it won't be able to; they'll get an error.
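
For example, here is a minimal sketch (the class name and record involved are hypothetical, not from a real org) of code running with sharing enforced:

public with sharing class OpportunityRenamer {
    public static void rename(Id opportunityId, String newName) {
        Opportunity opp = [SELECT Id, Name FROM Opportunity WHERE Id = :opportunityId];
        opp.Name = newName;
        // If the Share table only grants the running user read access to this
        // record, the update fails with an insufficient access error.
        update opp;
    }
}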

What Sharing Isn't


Other than a precious few profile permissions, which you could accurately describe as a "sharing rule" granted to that one user, the "with sharing" and "without sharing" keywords mean nothing with regard to enforcing profile permissions, such as the ability for a user to read or write a particular object or field. In other words, a user whose profile does not allow editing a record will still be able to update that record from Apex Code. As developers, we need to make sure that any such updates are either "system actions" that should never fail, or we need to check whether a user can access a particular object before performing an action on their behalf.
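
If an update really does need to respect profile permissions, a describe check can be performed first. Here is a minimal sketch, assuming we're updating an Account's Name on behalf of the user (accountRecord is a hypothetical variable):

// Verify object- and field-level permissions before performing the DML.
if(Schema.sObjectType.Account.isUpdateable() &&
    Schema.sObjectType.Account.fields.Name.isUpdateable()) {
    update accountRecord;
} else {
    // Handle the missing permission (e.g. show a page message) instead of
    // silently updating on the user's behalf.
}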

Why Not Use "With Sharing" All The Time?


There are several compelling reasons, but it really boils down to this: most of the time, the system will protect data that needs protecting without us doing anything special. In most cases, using neither keyword results in the correct behavior. There are a few exceptions, however, where you must use "with sharing" or "without sharing" in order to guarantee the correct behavior. Using these keywords all the time carries an associated penalty in CPU time, so they should be used sparingly.

When Do I Use "With Sharing"?


If you have code that is called from a user-facing interface, such as a Visualforce page, and it performs a query or any DML on behalf of a user, use "with sharing." Without this keyword, it is possible for users to view or update records they do not have access to. There are a few times when this is desirable, of course, but those are really the exceptions to the rule. If a class does not perform any DML operation or query, do not specify either mode.
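
A sketch of that pattern (the class name and query here are illustrative only):

// "with sharing" makes the query return only records visible to the user.
public with sharing class MyAccountListController {
    public List<Account> getAccounts() {
        return [SELECT Id, Name FROM Account ORDER BY Name LIMIT 100];
    }
}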

When Do I Use "Without Sharing"?


Usually, never. Since the default mode for code is usually "without sharing," there's rarely an opportunity to use this keyword. You'd use it to "break out" of sharing mode (say, to update a normally read-only record) while in sharing mode. This is far less common than you'd think, because code called from within sharing mode usually needs to be with sharing, and code called from without sharing mode needs to be without sharing. If you do need to use this keyword, it may indicate that something else is wrong in your code. Of course, sometimes it's completely unavoidable, but every attempt should be made to avoid it before resorting to "without sharing."
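
When it truly is unavoidable, isolate the elevated logic in the smallest class possible. A hypothetical sketch (View_Count__c is a made-up field):

// Runs without sharing so the update succeeds even when the user
// only has read access to the record through sharing.
public without sharing class ViewCounter {
    public static void increment(Id accountId) {
        Account a = [SELECT Id, View_Count__c FROM Account WHERE Id = :accountId];
        a.View_Count__c = (a.View_Count__c == null ? 0 : a.View_Count__c) + 1;
        update a;
    }
}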

When Can I Omit The Sharing Mode?


You generally only need to specify the sharing mode for classes that actually perform a query or DML operation. Classes that don't fall into either category should not be marked as either with sharing or without sharing. This means they will inherit their permissions from the current mode. The same is true of trigger utility/helper classes as well as Visualforce helper classes. Generally speaking, there's no need to ever mark a utility class as "with sharing" or "without sharing"; it should operate correctly based on its current sharing mode.
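
For example, a utility class like this sketch simply runs in whatever mode its caller established:

// No sharing keyword: the query respects or ignores sharing depending on
// whether the calling class was declared "with sharing" or "without sharing".
public class ContactCounter {
    public static Integer countFor(Id accountId) {
        return [SELECT COUNT() FROM Contact WHERE AccountId = :accountId];
    }
}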

Should I Use Either Mode?


If you're not sure which mode to use, simply ask yourself these questions:

Am I using any DML operations or queries?


If the answer is no, leave the sharing as the default mode, because otherwise you're simply wasting CPU time. This is true even if the class happens to be a Visualforce page controller.

Does this class act as a controller for a page or component?


If the answer is yes, you should always use "with sharing" to prevent unauthorized updates.

Do I need to expose potentially restricted data, or update records the user may not have access to?


If the answer is yes, you should usually use "without sharing" to allow the unauthorized access. This should only be used as a last resort, because usually the default model is correct.

Summary


Except for Visualforce controllers, most classes should actually be written using the default sharing model. Visualforce pages, custom REST API calls, and the like should specify "with sharing," while most other classes should use the default model. "Without sharing" should only be used when the default model causes issues that can't be resolved by fixing sharing rules, profile permissions, and so on. In a typical project, the majority of your classes will use the default model, the majority of page controllers will use "with sharing," and a precious few will use "without sharing."

Thursday, January 7, 2016

Converting XML To Native Objects

The Problem

Sometimes, we are forced to work with XML. Sometimes, what we want to do is to load that data into an object so we can use it. Usually, we do not even care about most of the attributes, but simply want the element names turned into fields and the values placed in the appropriate fields. It turns out that writing long, convoluted functions that are specific to each XML format is incredibly fragile, incredibly hard to troubleshoot when things go wrong, and generally very CPU intensive.

The Solution

I will show you a way that you can effortlessly change XML into a Map, then into JSON, and finally into a native object. It does this in less than 100 lines of code, no matter how big your XML file is (not including the Apex classes you are transforming into, of course). There are some inherent limitations you have to watch out for in this demo version, but I hope that somebody finds this useful. Feel free to tweak this code any way that you see fit for your particular purpose. Also note that this version doesn't handle just strings; it also handles Boolean values, numbers, dates, and times, which will be converted to a native format on a best-effort basis.

The Code

This code has been heavily documented so readers can get a feel for how it works, so I am not going to spend a lot of time explaining it. Please leave comments if you have any questions.
public class XmlToJson {
    // Try to determine some data types by pattern
    static Pattern boolPat = Pattern.compile('^(true|false)$'),
        decPat = Pattern.compile('^[-+]?\\d+(\\.\\d+)?$'),
        datePat = Pattern.compile('^\\d{4}.\\d{2}.\\d{2}$'),
        timePat = Pattern.compile('^\\d{4}.\\d{2}.\\d{2} '+
            '(\\d{2}:\\d{2}:\\d{2} ([-+]\\d{2}:\\d{2})?)?$');

    // Primary function to decode XML
    static Map<Object, Object> parseNode(Dom.XmlNode node, Map<Object, Object> parent) {
        // Iterate over all child elements for a given node
        for(Dom.XmlNode child: node.getChildElements()) {
            // Pull out some information
            String nodeText = child.getText().trim(),
                name = child.getName();
            // Determine data type
            Object value =
                // Nothing
                String.isBlank(nodeText)? null:
                // Try boolean
                boolPat.matcher(nodeText).find()? (Object)Boolean.valueOf(nodeText):
                // Try decimals
                decPat.matcher(nodeText).find()? (Object)Decimal.valueOf(nodeText):
                // Try dates
                datePat.matcher(nodeText).find()? (Object)Date.valueOf(nodeText):
                // Try times
                timePat.matcher(nodeText).find()? (Object)DateTime.valueOf(nodeText):
                // Give up, use plain text
                (Object)nodeText;
            // We have some text to process
            if(value != null) {
                // We already have a value here, convert it to a list
                if(parent.containsKey(name)) {
                    try {
                        // We already have a list, so just add it
                        ((List<Object>)parent.get(name)).add(value);
                    } catch(Exception e) {
                        // We don't have a list, so convert to a list
                        parent.put(name, new List<Object>{parent.get(name), value});
                    }
                } else {
                    // Store a new value
                    parent.put(name, value);
                }
            } else if(child.getNodeType() == Dom.XmlNodeType.ELEMENT) {
                // If it's not a comment or text, recursively process the data
                Map<Object, Object> temp = parseNode(child, new Map<Object, Object>());
                // If at least one node, add a new element into the array
                if(!temp.isEmpty()) {
                    // Again, create or update a list if we have a value
                    if(parent.containsKey(name)) {
                        try {
                            // If it's already a list, add it
                            ((List<Object>)parent.get(name)).add(temp);
                        } catch(Exception e) {
                            // Otherwise, convert the element into a list
                            parent.put(name, new List<Object> { parent.get(name), temp });
                        }
                    } else {
                        // New element
                        parent.put(name, temp);
                    }
                }
            }
        }
        return parent;
    }

    // This function converts XML into a Map
    public static Map<Object, Object> parseDocumentToMap(Dom.Document doc) {
        return parseNode(doc.getRootElement(), new Map<Object, Object>());
    }

    // This function converts XML into a JSON string
    public static String parseDocumentToJson(Dom.Document doc) {
        return JSON.serialize(parseDocumentToMap(doc));
    }

    // This function converts XML into a native object
    // If arrays are expected, but not converted automatically, this call may fail
    // If so, use the parseDocumentToMap function instead and fix any problems
    public static Object parseDocumentToObject(Dom.Document doc, Type klass) {
        return JSON.deserialize(parseDocumentToJson(doc), klass);
    }
}

The Unit Test

Of course, no code would be worth having without a unit test, so here is the related unit test that you would use to deploy this code to production. It has 100% coverage, and demonstrates the proper way to write a unit test.
@isTest
class XmlToJsonTest {
    @isTest static void test() {
        Dom.Document doc = new Dom.Document();
        doc.load(
            '<a>'+
            '<b><c>Hello World</c><d>2016-05-01</d><e>2016-05-01 '+
            '11:29:00 +03:00</e><f>true</f><g>3.1415</g><h>Two</h><h>Parts</h></b>'+
            '<b><c>Hello World</c><d>2016-05-01</d><e>2016-05-01 '+
            '11:29:00 +03:00</e><f>true</f><g>3.1415</g><h>Two</h><h>Parts</h></b>'+
            '</a>'
        );
        A r = (A)XmlToJson.parseDocumentToObject(doc, A.class);
        System.assertNotEquals(null, r);
        System.assertNotEquals(null, r.b);
        for(Integer i = 0; i != 2; i++) {
            System.assertNotEquals(null, r.b[i].c);
            System.assertNotEquals(null, r.b[i].d);
            System.assertNotEquals(null, r.b[i].e);
            System.assertNotEquals(null, r.b[i].f);
            System.assertNotEquals(null, r.b[i].g);
            System.assertNotEquals(null, r.b[i].h);
        }
    }
    class A {
        public B[] b;
    }
    class B {
        public String c;
        public Date d;
        public DateTime e;
        public Boolean f;
        public Decimal g;
        public String[] h;
    }
}

Warnings

This code may behave oddly if you use anything other than XML formatted as in the example above. Mixing in text, comments, or CDATA in places could cause problems. Also, as commented in the code, the JSON parser may fail if it expects an array and does not find one. This means you may need to do some post-processing; that is the reason why there are three functions, so that a developer can stop at any stage of the processing to do some manipulation.
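
For example, if the XML contained only a single <b> element, the parser stores a Map rather than a List, and deserializing straight into the unit test's A class (which expects a list of B) would fail. A sketch of the kind of post-processing that fixes this, using the intermediate Map:

Map<Object, Object> data = XmlToJson.parseDocumentToMap(doc);
// Wrap a lone <b> element in a list so it matches the B[] member on A.
if(data.containsKey('b') && !(data.get('b') instanceof List<Object>)) {
    data.put('b', new List<Object>{ data.get('b') });
}
A result = (A)JSON.deserialize(JSON.serialize(data), A.class);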

Future Enhancements

Here are a few things I thought of while writing this code, but did not implement, simply because I wanted the code to be as straightforward as possible:
  • Improve empty element support
  • Assume that members ending in "s" are plural (and thus, automatically create a list)
  • Add CDATA support
  • Provide additional data types, like Blobs.

Conclusion

Just a few lines of code can help you parse a variety of XML formats into native objects. The code runs reasonably fast, and is far easier to read and maintain than functions that span hundreds or thousands of lines of conditional branches and loops. If you found this code useful, please let me know. If you have any suggestions for improvements (within the limited scope of making this post more useful), I'd like to hear them as well.

Monday, January 4, 2016

Avoid Checking For Non-Existent Nulls

Introduction

At some point, every developer that's worked in Apex Code has received a System.NullPointerException. It's inevitable. Sometimes the situation is figured out quickly; other times, developers may not know what caused the exception, so they start sprinkling null guards everywhere to try to keep it from happening again. Eventually, code may end up with so many of these guards that it is less readable, more verbose than it should be, and, worst of all, running slower than it could be, sometimes by a significant amount.

In this post, we're going to explore things that are never null, so readers can avoid the most common null checks I've observed in code, which will improve the code's performance. Since we only have so much time to execute code, known as the Apex CPU limit, it's in our best interest to reduce the amount of time our code takes to run. Besides, your users will thank you for a faster, more responsive user interface and API.

Queries

One common theme that I see in code is that developers will check to make sure a query is not null before trying to access it. These checks do carry a penalty in the form of Apex CPU time that's wasted on a check that will never prevent a System.NullPointerException. While it is true that fields returned from the database may be null, depending on their data type, the list returned from a query will never be null. This code only serves to confuse newer developers into thinking that an empty list cannot be returned from a query, thus perpetuating null checks.

Example

Account[] myAccounts = [SELECT Id, Name FROM Account WHERE OwnerId = :UserInfo.getUserId()];

// The query results will never be null. Why did I check this?
if(myAccounts != null && myAccounts.size() > 0) {
    ...

Query Result Elements

Similarly, there seems to be some confusion about what the results in the array may look like. I have actually seen code similar to the following:

for(Account record : [SELECT Id, Name FROM Account]) {
    if(record != null) {
        ...

This condition will always be true. A single record from a query will not be null. We do not need to check that individual records are somehow null before trying to do something with them.

Query Standard Universally Required Fields

When we query records normally, such as a non-aggregate SOQL call or any SOSL call, we always get the Id field back in the query, and it will never be null. There's never any reason to check the Id field to see if it is null when it's returned from a query. Obviously, the Id may be null if we cloned the record, cleared the fields, constructed a new record, or accessed a related Id via a parent relationship (e.g. in a query that returns Contact records along with Account.Id, the Account.Id will be null when AccountId is null). However, unless we've manipulated the records in some way, we can safely assume that the Id is present on any record directly returned from the query.
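
A sketch of the redundant pattern this section describes:

for(Case record : [SELECT Id, Subject FROM Case LIMIT 10]) {
    // Id is always populated on rows returned directly from a query,
    // so this check is always true and only wastes CPU time.
    if(record.Id != null) {
        ...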

Query Relationships

One subtle point about the system is that relationships returned from a query may be null, but accessing a field through a null relationship does not throw a System.NullPointerException. You can safely avoid checking if a relationship is null if it came from a query, although you will want to check if the field you accessed was null. This only applies to statically compiled references, however, so if you're using dynamic record navigation via SObject.getSObject or SObject.getSObjects, you will want to check for nulls, because you can get an exception.

Examples

// Example 1
for(Contact record : [SELECT Account.Name FROM Contact]) {
    if(record.Account.Name != null) {
        ...

// Example 2
Account[] accounts = new Account[0];
for(Account record : [SELECT (SELECT Id FROM Contacts) FROM Account]) {
    record.No_of_Contacts__c = record.Contacts.size();
    accounts.add(record);
}
update accounts;

Note

Even though relationships are protected against nulls, individual fields are not. Do not assume this code is safe:

// If Account.Name is null (including when the Account lookup itself is null),
// toUpperCase will fail with a System.NullPointerException.
if(contactRecord.Account.Name.toUpperCase().equals('HELLO WORLD')) {
    ...

Checking Your Own Variables

Generally speaking, you should avoid having null variables. You should be able to tell which values are null or not null based simply on their origins. You should never have to guess about whether your own variables are null, especially for sets, lists, and maps. Conversely, you should generally assume that any field that comes from the database, Visualforce bindings, or callouts may contain nulls, unless it's obvious that a value cannot possibly be null. Fields that will never be null include "Id", "CreatedDate", "CreatedById", "LastModifiedDate", "LastModifiedById", and "Name", as well as any field that is Boolean or required by the system.
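
One easy way to keep your own collections out of the "maybe null" category is to initialize them at the point of declaration, as in this sketch:

// Initialized at declaration, so they are never null.
List<Account> accountsToUpdate = new List<Account>();
Map<Id, Account> accountsById = new Map<Id, Account>();

for(Account a : [SELECT Id, Name FROM Account LIMIT 10]) {
    accountsById.put(a.Id, a);
    accountsToUpdate.add(a);
}

// No null checks needed; at worst, the collections are simply empty.
if(!accountsToUpdate.isEmpty()) {
    update accountsToUpdate;
}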

Learning The System Library

The system library prefers not to return null values. For example, you may safely assume that List.size will always return a non-null, non-negative value; there is never a need to check whether the value returned is null. Generally speaking, any function that will not accept a null value will not return a null value. The following code may safely be run without checking for nulls:

Date firstSundayOfMonth = Date.today().toStartOfMonth().addDays(6).toStartOfWeek();

Each function in that chain returns another Date, so we are guaranteed to receive a valid date value in firstSundayOfMonth. Most other functions follow this behavior; the usual way to signal a bad input value is by throwing an exception, so as long as you're checking the parameters you pass to the system library, most functions will never return a null value. Those functions that do are the exception rather than the rule, and they are usually explicitly documented as possibly returning a null value.
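
Map.get is a good example of those documented exceptions: it is defined to return null when the key is missing, so a null check is appropriate there:

Map<String, Integer> counts = new Map<String, Integer>{ 'apples' => 3 };
Integer oranges = counts.get('oranges'); // documented to return null for a missing key
if(oranges == null) {
    oranges = 0;
}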

Conclusion

Apex Code is strongly typed, but not as optimized as Java, so taking a few extra moments to learn which functions can return a null value, and which cannot, will go a long way toward writing code that is easier to read, less likely to run into governor limits, and easier to maintain code coverage for.