Which information is provided in a .csv file when activating to Amazon S3?
Correct Answer:
B
When activating to Amazon S3, the information provided in the .csv file is the activated data payload. The activated data payload is the data that Data Cloud sends to the activation target, in this case an Amazon S3 bucket. It contains the attributes and values of the individuals or entities included in the activated segment and can be used for purposes such as marketing, sales, service, or analytics.
The other options are not provided in a .csv file when activating to Amazon S3. Option A is incorrect because an audit log is not delivered in the .csv file; it can be viewed in the Data Cloud UI under the Activation History tab. Option C is incorrect because metadata about the segment definition is not delivered in the .csv file; it can be viewed under the Segmentation tab. Option D is incorrect because a manifest of origin sources within Data Cloud is not delivered in the .csv file; it can be viewed under the Data Sources tab.
References: Data Activation Overview, Create and Activate Segments in Data Cloud, Data Activation Use Cases, View Activation History, Segmentation Overview, Data Sources Overview
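As a rough illustration of what that payload looks like in practice, the sketch below reads a hypothetical activated .csv file from an S3 bucket with Python and boto3; the bucket name, object key, and column names are all assumptions, not an exact Data Cloud output schema.

```python
import csv
import io

import boto3  # assumes AWS credentials are configured locally

# Hypothetical bucket/key written by a Data Cloud S3 activation;
# names and columns are illustrative, not a documented Data Cloud layout.
BUCKET = "my-activation-bucket"
KEY = "segments/high_value_customers/2024-01-15.csv"

s3 = boto3.client("s3")
body = s3.get_object(Bucket=BUCKET, Key=KEY)["Body"].read().decode("utf-8")

# Each row is one activated segment member with its selected attributes.
for row in csv.DictReader(io.StringIO(body)):
    print(row["Email"], row["FirstName"], row["LoyaltyTier"])
```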
Cumulus Financial is currently using Data Cloud and ingesting transactional data from its backend system via an S3 Connector in upsert mode. During the initial setup six months ago, the company created a formula field in Data Cloud to create a custom classification. It now needs to update this formula to account for more classifications.
What should the consultant keep in mind with regard to formula field updates when using the S3 Connector?
Correct Answer:
D
A formula field calculates a value based on other fields or constants. When the S3 Connector ingests data from an Amazon S3 bucket, Data Cloud supports creating and updating formula fields on the data lake objects (DLOs) that store the data from the S3 source. However, formula field updates are not applied immediately; they take effect at the next incremental upsert refresh of the data stream. An incremental upsert refresh adds new records and updates existing records from the S3 source to the DLO based on the primary key field. Therefore, the consultant should keep in mind that the formula field update will affect both new and existing records, but only after the next incremental upsert refresh of the data stream.
The other options are incorrect because Data Cloud does not initiate a full refresh of data from S3, does not apply the updated formula only to new records, and does support formula field updates for data streams of type upsert.
References: Create a Formula Field, Amazon S3 Connection, Data Lake Object
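To make the refresh behavior concrete, here is a minimal, self-contained Python sketch that simulates an incremental upsert keyed on a primary key and applies an updated classification formula at refresh time; the field names, tiers, and thresholds are invented for illustration and are not Data Cloud internals.

```python
# Illustrative simulation only: shows how an updated classification formula
# is applied when records arrive through an incremental upsert refresh.
# Classification names and thresholds are assumptions, not Data Cloud logic.

def classify(amount: float) -> str:
    """Updated formula: now distinguishes a 'platinum' tier as well."""
    if amount >= 10_000:
        return "platinum"
    if amount >= 1_000:
        return "gold"
    return "standard"

def incremental_upsert(dlo: dict, incoming: list) -> None:
    """Upsert incoming rows into the DLO by primary key, recomputing the formula."""
    for row in incoming:
        row["classification"] = classify(row["amount"])  # formula runs at refresh time
        dlo[row["transaction_id"]] = row  # insert new record or overwrite existing one

# Existing record still carries the classification from the old formula.
data_lake_object = {
    "T1": {"transaction_id": "T1", "amount": 12_000, "classification": "gold"},
}

incremental_upsert(data_lake_object, [
    {"transaction_id": "T1", "amount": 12_000},  # existing record, re-sent by the source
    {"transaction_id": "T2", "amount": 500},     # brand-new record
])
print(data_lake_object)
```

In the sketch, both the re-sent existing record and the new record receive the updated classification when the refresh runs, mirroring the behavior described above.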
A user wants to be able to create a multi-dimensional metric to identify unified individual lifetime value (LTV).
Which sequence of data model object (DMO) joins is necessary within the Calculated Insight to enable this calculation?
Correct Answer:
A
To create a multi-dimensional metric for unified individual lifetime value (LTV), the necessary sequence of data model object (DMO) joins within the Calculated Insight is Unified Individual > Unified Link Individual > Sales Order. The Unified Individual DMO represents the unified profile of an individual or entity created by identity resolution. The Unified Link Individual DMO represents the link between a unified individual and an individual from a source system. The Sales Order DMO represents the sales order information from a source system. By joining these three DMOs, you can calculate the LTV of a unified individual based on sales order data from different source systems.
The other options do not join the correct DMOs to enable the LTV calculation. Option B is incorrect because the Individual DMO represents the source profile of an individual or entity from a source system, not the unified profile. Option C is incorrect because the join order is reversed; you need to start with the Unified Individual DMO to identify the unified profile. Option D is incorrect because it is missing the Unified Link Individual DMO, which is needed to link the unified profile with the source profiles.
References: Unified Individual Data Model Object, Unified Link Individual Data Model Object, Sales Order Data Model Object, Individual Data Model Object
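The join path can be pictured with a small pandas sketch (illustrative only; Calculated Insights are defined in Data Cloud itself, and the column names below are assumptions): each unified profile fans out to its linked source individuals, whose sales orders are then summed into a lifetime value.

```python
import pandas as pd

# Hypothetical, simplified stand-ins for the three DMOs.
unified_individual = pd.DataFrame({"unified_id": ["U1", "U2"]})
unified_link_individual = pd.DataFrame({
    "unified_id": ["U1", "U1", "U2"],           # one unified profile ...
    "source_individual_id": ["A1", "B7", "C3"],  # ... links to many source profiles
})
sales_order = pd.DataFrame({
    "source_individual_id": ["A1", "B7", "C3", "C3"],
    "order_total": [120.0, 80.0, 45.0, 30.0],
})

ltv = (
    unified_individual
    .merge(unified_link_individual, on="unified_id")              # join 1
    .merge(sales_order, on="source_individual_id")                # join 2
    .groupby("unified_id", as_index=False)["order_total"].sum()   # LTV per unified profile
    .rename(columns={"order_total": "lifetime_value"})
)
print(ltv)
```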
A Data Cloud consultant recently added a new data source and mapped some of the data to a new custom data model object (DMO) that they want to use for creating segments. However, they cannot view the newly created DMO when trying to create a new segment.
What is the cause of this issue?
Correct Answer:
B
The cause of this issue is that the new custom data model object (DMO) is not of category Profile. A category is a property of a DMO that defines its purpose and functionality in Data Cloud. There are three categories of DMOs: Profile, Event, and Other.
Profile DMOs store attributes of individuals or entities, such as name, email, and address. Event DMOs store actions or interactions of individuals or entities, such as purchases, clicks, and visits. Other DMOs store any data that does not fit the Profile or Event categories, such as products, locations, or categories. Only Profile DMOs can be used for creating segments in Data Cloud, because segments are based on the attributes of individuals or entities. Therefore, if the new custom DMO is not of category Profile, it will not appear in the segmentation canvas.
The other options are not the cause of this issue. Data ingestion is not a prerequisite for creating segments, as segments can be created from the data model schema without actual data. The new DMO does not need a relationship to the Individual DMO, as segments can be created from any Profile DMO regardless of its relationships to other DMOs. Segmentation is not limited to the Individual and Unified Individual DMOs; segments can be created from any Profile DMO, including custom ones.
References: Create a Custom Data Model Object from an Existing Data Model Object, Create a Segment in Data Cloud, Data Model Object Category
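The category check itself is simple enough to sketch in a few lines of Python; the DMO names below are hypothetical and only illustrate why a non-Profile DMO never reaches the segmentation canvas.

```python
# Minimal illustration of the category filter: only DMOs of category "Profile"
# are offered for segmentation. DMO names here are assumptions.
dmos = [
    {"name": "Individual", "category": "Profile"},
    {"name": "CustomLoyaltyData", "category": "Other"},   # the newly mapped custom DMO
    {"name": "WebsiteEngagement", "category": "Event"},
]

segmentable = [d["name"] for d in dmos if d["category"] == "Profile"]
print(segmentable)  # ['Individual'] -- the custom DMO stays hidden until it is a Profile DMO
```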
Northern Trail Outfitters uploads new customer data to an Amazon S3 Bucket on a daily basis to be ingested in Data Cloud.
In what order should each process be run to ensure that freshly imported data is ready and available to use for any segment?
Correct Answer:
D
To ensure that freshly imported data from an Amazon S3 bucket is ready and available to use for any segment, the processes should be run in this order (a placeholder sketch of the sequence appears after the references below):
- Refresh Data Stream: This process updates the data lake objects in Data Cloud with the latest data from the source system. It can be configured to run automatically or manually, depending on the data stream settings. Refreshing the data stream ensures that Data Cloud has the most recent and accurate data from the Amazon S3 bucket.
- Identity Resolution: This process creates unified individual profiles by matching and consolidating source profiles from different data streams based on the identity resolution ruleset. It runs daily by default, but can also be triggered manually. Identity resolution ensures that Data Cloud has a single view of each customer across data sources.
- Calculated Insight: This process performs calculations on data lake objects or CRM data and returns the result as a new data object. It can be used to create metrics or measures for segmentation or analysis. Calculated Insights ensure that Data Cloud has the derived data needed for personalization or activation.
References:
- Configure Data Stream Refresh and Frequency - Salesforce
- Identity Resolution Ruleset Processing Results - Salesforce
- Calculated Insights - Salesforce
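For illustration only, the sketch below strings the three steps together as placeholder Python functions (they are not real Data Cloud API calls) to emphasize that each step consumes the output of the previous one.

```python
# Order-of-operations sketch; the three functions are placeholders that only
# print what each stage would do. Stream, ruleset, and insight names are made up.

def refresh_data_stream(stream_name: str) -> None:
    print(f"1. Refreshing data stream '{stream_name}' from the S3 bucket...")

def run_identity_resolution(ruleset: str) -> None:
    print(f"2. Running identity resolution ruleset '{ruleset}' to rebuild unified profiles...")

def run_calculated_insight(insight: str) -> None:
    print(f"3. Recomputing calculated insight '{insight}' over the unified data...")

# Each step depends on the output of the one before it, so they run in sequence.
refresh_data_stream("NTO_Customer_S3")
run_identity_resolution("Default_Ruleset")
run_calculated_insight("Customer_LTV")
```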