Take a Closer Look at SuiteScript in SuiteCommerce Advanced

The frontend SuiteCommerce Advanced code is just one detail in the larger picture of your site's success on NetSuite. Another crucial part is SuiteScript — the JavaScript-like code that runs on our servers. We use it to connect the frontend of your site to our system's databases, allowing you to perform CRUD operations on records.

SuiteScript knowledge is essential to any customization work that involves records. In other words, if you're a frontend developer who only designs and tinkers with frontend components, you'll be fine without it; everyone else, anyone writing code that interacts with the backend, will have to pick it up.

We teach SuiteScript, both separately and as part of our SCA training courses, should you wish to study it intensively. You can also take a look at some of the tutorials that use SuiteScript, such as the artists module and the testimonials module. Don't forget that we also have documentation on SuiteScript in the help center.

In the meantime, however, and assuming you have a working knowledge of SuiteScript, let's take a look at some tasty SuiteScript tidbits.

Record Structure

Records in SCA are represented as JSON objects. When you send data to NetSuite, it must be JSON; when you receive it, it arrives as JSON.

Let's take a look at a very simple version of a create method in a backend model:

create: function (data) {
  // Check the incoming data against the model's validation rules
  this.validate(data);
  // Initialize a new instance of the custom record type
  var record = nlapiCreateRecord('customrecord_somethingimade');
  record.setFieldValue('name', data.name);
  record.setFieldValue('custrecord_something', data.something);
  record.setFieldValue('custrecord_owner', nlapiGetUser());
  // Submitting is what actually saves the record; it returns the internal ID
  return nlapiSubmitRecord(record);
}

When the create method is called, it's passed the data from the frontend model. Normally, the first thing we recommend doing is running a validation check, which we'll talk about later. Then we run nlapiCreateRecord.

nlapiCreateRecord initializes a new record of the type you specify; in my fictitious example, it'll initialize the custom record called customrecord_somethingimade.

From there we then set the field values that we want. Some fields, such as the name, are standard; records can also have custom fields, and we can set those too. Note how we pull these values from the data parameter that was passed in.

Another thing to note is that we use nlapiGetUser() — this clever little function returns the internal id of the current user, which is necessary when creating new records (so the system knows to whom it should associate the record).

Lastly, we pass the finalized record to the nlapiSubmitRecord function and return its result (the internal ID of the new record). This step is required: simply initializing a record with nlapiCreateRecord is not enough to create it.

Updating Records

The structure for a request to update a record is very similar to the create one. The main differences are:

  1. Before you pass the data, you pass the ID of the record you want to access
  2. Instead of nlapiCreateRecord, you use nlapiLoadRecord, which requires the ID of the record in addition to the record type
  3. You do not need to set the owner because we loaded an existing record, which already has an owner set

Other than that, the structure is largely the same.
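Putting those differences together, a minimal update method might look like the following. This is a sketch based on the create example above, not actual SCA source; it's wrapped in a plain object here only so the snippet stands alone, and the validate stub stands in for the real validation described later.

```javascript
var SomethingModel = {
  // In a real model, Backbone.Validation rules would drive this;
  // stubbed here so the sketch is self-contained
  validate: function (data) {},

  update: function (id, data) {
    this.validate(data);

    // Load the existing record by type and internal ID
    var record = nlapiLoadRecord('customrecord_somethingimade', id);

    record.setFieldValue('name', data.name);
    record.setFieldValue('custrecord_something', data.something);
    // No need to set custrecord_owner: the loaded record already has an owner

    // Submitting commits the changes and returns the record's internal ID
    return nlapiSubmitRecord(record);
  }
};
```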

nlapiSubmitField

As I just mentioned, most update methods follow a similar pattern to create ones: create/load a record, enter the field values, and then submit them.

However, this is inefficient if you know exactly what you want to do — why pull down existing details (of which there could be many) when all you want to do is change the value of one field?

This is where nlapiSubmitField comes in. This function lets you submit a value for a single field without having to load the record first. Super handy.

Before you dive right in and use it, note that it is only appropriate (and efficient) in specific circumstances. It only works when you're editing body fields (ie non-sublist fields), and is only performant when that field supports inline editing.

We ourselves only use it in a couple of places in the SCA source code. For example, in the model for the Newsletter module:

_.each(customers_to_subscribe, function (subscriber)
{
  nlapiSubmitField('customer', subscriber.id, 'globalsubscriptionstatus', 1, false);
});

Overall, what this code does is take a list of customers and update the subscription status for each one. For each customer, we only need to change the value of one field. The arguments, in order, are:

  1. 'customer' — the ID of the record type
  2. subscriber.id — the ID of the record you want to update
  3. 'globalsubscriptionstatus' — the ID of the field you want to update
  4. 1 — the value you want to change the field to
  5. false — in this instance, we are not using field sourcing

Another way to look at this is to go to the backend UI for a list of records. If you can input a new value into a field without having to go off and select values from a list, then you should be good.

It's worth noting that you can use the same request to update multiple fields of the same record. You can specify arrays for both the field IDs and their values — all you need to do is make sure that the order of the IDs and values is the same across both arrays. For an example of this, check out the model for the Case module.
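Sketched out, a multi-field update might look like this. The helper name and field choices here are mine for illustration, not the Case module's actual code; the final false disables field sourcing, as in the Newsletter example.

```javascript
// Hypothetical helper: update several inline-editable body fields of one
// customer record in a single nlapiSubmitField call
function updateContactDetails(customerId, data) {
  // The nth entry in values is submitted to the nth field in fields,
  // so the two arrays must stay in the same order
  var fields = ['phone', 'url'];
  var values = [data.phone, data.url];

  nlapiSubmitField('customer', customerId, fields, values, false);
}
```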

Performance

So I said that this was good for performance. Note that this is only the case when you use it appropriately. This method can still be used to submit fields that don't support inline editing, but it will undo any of the performance savings you would have made.

For example, say you submit three fields through nlapiSubmitField but one of them is not inline-editable: the system must load the record, set the field values, and then submit it again, so you lose the performance gains you would have gotten.

Code Style

One small thing I want to add is how you present your code. If you're just updating one field, putting everything on one line is simple enough. However, if you're going to update multiple fields, then you may wish to present your code in a more readable format.

Firstly, don't forget that you can use variables in place of the actual arrays. So you could write something like this:

nlapiSubmitField('customer', nlapiGetUser(), custFields, custValues);

And then, secondly, you can set the values of these arrays either by using something fancy, like _.map(), or with something like this:

var custFields = []
  , custValues = [];

custFields[0] = 'phone';
custValues[0] = data.phone;
custFields[1] = 'url';
custValues[1] = data.url;
custFields[2] = 'billpay';
custValues[2] = data.billpay;

Here you can see how we have neatly grouped the relevant array field IDs and values so that it's easy to keep track of what we're updating. I'm not saying you have to do it this way, but if you're the type of person who really wants that clarity then it could certainly help.
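For completeness, here's a sketch of the fancier route: deriving the two parallel arrays from a single object of field/value pairs using Underscore's _.keys and _.values. The helper name is my own, not from the SCA source.

```javascript
// Build aligned fields/values arrays from one object, so a field ID and
// its value can never drift out of step with each other
function buildFieldArrays(data) {
  var updates = {
    phone: data.phone,
    url: data.url,
    billpay: data.billpay
  };

  return {
    fields: _.keys(updates),
    values: _.values(updates)
  };
}
```

The result's fields and values properties can then be passed straight through as the third and fourth arguments of nlapiSubmitField.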

Validation

Let's talk data integrity. You need to validate the data you send to NetSuite to ensure that it is sane and conforms to standards. We've built in a number of validations that our core modules use, which you can also use.

Validation is provided by a third-party library, Backbone.Validation. With this installed, we can specify the rules to apply and then have our backend models validate the data against them.

To specify which rules apply, you add a validation object to the object you pass to SCModel.extend, eg:

return SCModel.extend({
  validation: {
    'name': {
      required: true,
      msg: 'Please enter a name'
    },
    'email': {
      required: true,
      pattern: 'email',
      msg: 'Please enter a valid email'
    }
  },
  // ...the rest of the model's methods (create, update, etc.)
});

It's important to note that the keys of the nested objects (name and email here) must match the names of the attributes in the data being validated.

Things like required, pattern and msg are 'standard' fields in the validation library file. You can take a look at the documentation or the file itself (backbone-validation.js) for information on all the standard validation rules that are available.

If you want to add your own validation rules, you can: either on a per-model basis, or by setting them globally and then invoking them on your model. For example, if I wanted to add a custom validator for the value of name in the above example, I could replace it with the following:

validation: {
  'name': function(value, attr, computedState) {
    if(value !== 'something') {
      return 'Name is invalid';
    }
  }
}

In this scenario, my function will run on the value of name. You can also reuse ad-hoc functions by naming them. For example:

validation: {
  'name': 'validateName'
},
validateName: function(value, attr, computedState) {
  if(value !== 'something') {
    return 'Name is invalid';
  }
}

And, finally, you can create custom 'global' validators by extending the validators object (you could also use this to override existing validators, if you want to change them):

_.extend(Backbone.Validation.validators, {
  myValidator: function(value, attr, customValue, model) {
    if(value !== customValue){
      return 'error';
    }
  },
  required: function(value, attr, customValue, model) {
    if(!value){
      return 'My version of the required validator';
    }
  },
});

In this example, I've added a new custom validator (myValidator) and I've overwritten the required validator with custom code.

Then, to call our validations, we just run this.validate(data) (where data is the name of the parameter passed to the method).

Events: Before and After

The way the backend architecture works, you can add events that trigger before and after a particular method is called. To enable this for your model, you'll first need to add Application as a dependency. With that, you gain access to three methods:

  1. on — which allows you to assign an event
  2. off — which allows you to remove an event assignment
  3. trigger — which you can use to trigger an event you've assigned

I talked about the modern implementation of this when I discussed the new service controllers a little while ago. To cut right to the chase, you can extend a service controller and then add in the following code:

Application.on('before:ProductReviews.ServiceController.get', function()
{
  console.log('Before you get');
});

Application.on('after:ProductReviews.ServiceController.get', function()
{
  console.log('After you get');
});

Here you can see that we're assigning two new events: one that fires before the ProductReviews service controller handles a GET, and another that fires after. In our examples, we're just doing some console logs, but you could use the opportunity to do whatever you want.
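The service controller example only uses on; here's a sketch of off and trigger as well. The event name and handler are made up for this illustration, and Application is passed in as a parameter only so the snippet stands alone (in a real module it arrives as a dependency).

```javascript
function demonstrateEvents(Application) {
  var handler = function () {
    console.log('myEvent fired');
  };

  // Assign a handler to a custom event
  Application.on('MyModule.myEvent', handler);

  // Fire every handler currently assigned to that event
  Application.trigger('MyModule.myEvent');

  // Remove the assignment so the handler no longer fires
  Application.off('MyModule.myEvent', handler);
}
```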

Console

On the subject of console logs, you'll note that there are two ways of logging data to the execution log:

nlapiLogExecution('DEBUG', 'Title', 'Some words what I wrote');
// or
console.log('Some words what I wrote');

While the second one will feel more natural to you, it only works because our team put SSPLibraries > Console.js in place. This file adds support for server-side console commands, which means you needn't bother with the more verbose, proprietary command if you don't want to. Just note how we map certain methods to each other:

  • console.log > nlapiLogExecution('DEBUG')
  • console.info > nlapiLogExecution('AUDIT')
  • console.error > nlapiLogExecution('EMERGENCY')
  • console.warn > nlapiLogExecution('ERROR')

Also, when you use one of the above methods, it splits the arguments you provide: the first is used to set the title of the log entry, while the second sets the detail. For example:

console.log('Hey!','Check out this debug log!');

This creates a log entry titled 'Hey!' with 'Check out this debug log!' as the detail.

Super handy if you ever need to log things to the execution log in a particular way without having to learn all the associated lingo that NetSuite requires.

Connection Between Frontend Models and Backend Models

A quick note on this: I know that I always wondered what causes the two to be linked; you know, how does Backbone know that a backend model relates to a frontend one?

The answer lies in the use of internalid: it signifies the connection and must be maintained if you want UI actions to carry changes from frontend models through to backend records.

If you did my tutorial on showing a shopper their product reviews with product data then you may remember the following code in the backend model:

var results = _.map(search, function(result) {
  return {
    reviewid: result.getValue('internalid')
  , rating: result.getValue('custrecord_ns_prr_rating')
  , text: result.getValue('custrecord_ns_prr_text')
  , itemid: result.getValue('custrecord_ns_prr_item_id')
  , created: result.getValue('created').split(" ", 1)
  }
});

Yes, I mapped the internalid value to a new key (reviewid). This was deliberate: amalgamating the two sets of data created a bit of an issue, as both data sets had a value for internalid and I didn't want one to overwrite the other.

If I were going to let the user perform UI actions on this data, which would then require POST/PUT methods, I couldn't have done this, as the mapping would have been lost.

This support comes partly from Backbone and partly from an intervention we made. Backbone supports something it calls idAttribute, which allows for this persistence through the frontend/backend models. We actually specify internalid as the value to use in BackboneExtras > Backbone.Model.js.
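As a sketch of the idea (an illustration, not the actual contents of that file; the function name is mine, and Backbone is passed in as a parameter only so the snippet stands alone):

```javascript
function useInternalIdAsId(Backbone) {
  // Tell Backbone that internalid, not its default 'id', identifies a model.
  // A model hydrated with an internalid is then treated as an existing
  // record, so saving it results in a PUT rather than a POST.
  Backbone.Model.prototype.idAttribute = 'internalid';
}
```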

More Information