A Guided Tour
Relay is a framework for managing and declaratively fetching GraphQL data. Specifically, it provides a set of APIs to fetch and declare data dependencies for React components, in colocation with component definitions themselves.
In this guide, we're going to go over how to use Relay to build out some of the more common use cases in apps. If you're interested in a detailed reference of our APIs, check out our API Reference page. Before getting started, bear in mind that we assume some level of familiarity with JavaScript, React, and GraphQL, and that you have set up a GraphQL server that adheres to the Relay specification.
Example App
To see a full example using Relay Hooks and our integration with Suspense for Data Fetching, check out: relay-examples/issue-tracker.
Setup and Workflow
In case you've never worked with Relay before, here's a rundown of what you need to set up to get up and running developing with Relay:
Installation
Install the experimental versions of React and Relay using yarn or npm:
yarn add react@experimental react-dom@experimental react-relay@experimental
Babel plugin
Relay requires a Babel plugin to process graphql
tags inside your JavaScript code:
yarn add --dev babel-plugin-relay graphql
Add "relay"
to the list of plugins in your .babelrc
file:
{
"plugins": [
"relay"
]
}
Please note that the "relay"
plugin should run before other plugins or
presets to ensure the graphql
template literals are correctly transformed. See
Babel's documentation on this topic.
Alternatively, instead of using babel-plugin-relay
, you can use Relay with babel-plugin-macros. After installing babel-plugin-macros
and adding it to your Babel config:
const graphql = require('babel-plugin-relay/macro');
If you need to configure babel-plugin-relay
further, you can do so by specifying the options in a number of ways.
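For example, one way is to pass the options inline in your .babelrc. This is a minimal sketch; the artifactDirectory value shown is illustrative and is only needed if you also configure the compiler to emit artifacts into a separate directory:
{
"plugins": [
["relay", {"artifactDirectory": "./src/__generated__"}]
]
}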
Relay Compiler
Whenever you're developing Relay components, for example by writing Fragments or Queries, you will need to run the Relay Compiler. The Relay Compiler will read and analyze any graphql
inside your JavaScript code, and produce a set of artifacts that will be used by Relay at runtime (i.e. when the application is running on the browser).
To install the compiler, you can use yarn or npm:
yarn add --dev relay-compiler
This installs the bin script relay-compiler in your node_modules folder. It's recommended to run this from a yarn/npm script by adding a script to your package.json file:
"scripts": {
"relay": "relay-compiler --src ./src --schema ./schema.graphql"
}
Or, if you are using JSX:
"scripts": {
"relay": "relay-compiler --src ./src --schema ./schema.graphql --extensions js jsx"
}
Then, whenever you've made edits to your application files, you can run the relay
script to run the compiler and generate new compiled artifacts:
# Single run
yarn run relay
You can also pass the --watch
option to watch for changes in your application files and automatically re-compile the artifacts (Note: Requires watchman to be installed):
# Watch for changes
yarn run relay --watch
Config file
The configuration of babel-plugin-relay
and relay-compiler
can be applied using a single configuration file by
using the relay-config
package. Besides unifying all Relay configuration in a single place, other tooling can leverage this to provide zero-config setup (e.g. vscode-apollo-relay).
Install the package:
yarn add --dev relay-config
And create the configuration file:
// relay.config.js
module.exports = {
// ...
// Configuration options accepted by the `relay-compiler` command-line tool and `babel-plugin-relay`.
src: "./src",
schema: "./data/schema.graphql",
exclude: ["**/node_modules/**", "**/__mocks__/**", "**/__generated__/**"],
}
Rendering Data Basics
Fragments
The main building block for declaring data dependencies for React Components in Relay are GraphQL fragments, which are essentially a selection of fields on a GraphQL Type:
fragment UserFragment on User {
name
age
profile_picture(scale: 2) {
uri
}
}
In order to declare a fragment inside your JavaScript code, you must use the graphql
tag:
const {graphql} = require('react-relay/hooks');
const userFragment = graphql`
fragment UserFragment on User {
name
age
profile_picture(scale: 2) {
uri
}
}
`;
In order to render the data for a fragment, you can use the useFragment
Hook:
import type {UserComponent_user$key} from 'UserComponent_user.graphql';
const React = require('React');
const {graphql, useFragment} = require('react-relay/hooks');
type Props = {|
user: UserComponent_user$key,
|};
function UserComponent(props: Props) {
const data = useFragment(
graphql`
fragment UserComponent_user on User {
name
profile_picture(scale: 2) {
uri
}
}
`,
props.user,
);
return (
<>
<h1>{data.name}</h1>
<div>
<img src={data.profile_picture?.uri} />
</div>
</>
);
}
module.exports = UserComponent;
Let's distill what's going on here:
- `useFragment` takes a fragment definition and a fragment reference, and returns the corresponding `data` for that fragment and reference.
- A fragment reference is an object that Relay uses to read the data declared in the fragment definition; as you can see, the `UserComponent_user` fragment itself just declares fields on the `User` type, but we need to know which specific user to read those fields from; this is what the fragment reference corresponds to. In other words, a fragment reference is like a pointer to a specific instance of a type that we want to read data from.
- Note that the component is automatically subscribed to updates to the fragment data: if the data for this particular `User` is updated anywhere in the app (e.g. via fetching new data, or mutating existing data), the component will automatically re-render with the latest updated data.
- Relay will automatically generate Flow types for any declared fragments when the compiler is run, so you can use these types to declare the type for your Component's `props`.
  - The generated Flow types include a type for the fragment reference, which is the type with the `$key` suffix: `<fragment_name>$key`, and a type for the shape of the data, which is the type with the `$data` suffix: `<fragment_name>$data`; these types are available to import from the generated files named `<fragment_name>.graphql.js` (see the example after this list).
  - We use our lint rule to enforce that the type of the fragment reference prop is correctly declared when using `useFragment`. By using a properly typed fragment reference as input, the type of the returned `data` will automatically be Flow-typed without requiring an explicit annotation.
  - In our example, we're typing the `user` prop as the fragment reference we need for `useFragment`, which corresponds to the `UserComponent_user$key` imported from `UserComponent_user.graphql`, which means that the type of `data` above would be: `{| name: ?string, profile_picture: ?{| uri: ?string |} |}`.
- Fragment names need to be globally unique. In order to easily achieve this, we name fragments using the following convention based on the module name followed by an identifier: `<module_name>_<property_name>`. This makes it easy to identify which fragments are defined in which modules and avoids name collisions when multiple fragments are defined in the same module.
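For reference, here is a minimal sketch of how the generated types can be used together; the explicit annotation on data is optional and is shown here only to illustrate the generated $data type:
import type {
  UserComponent_user$data,
  UserComponent_user$key,
} from 'UserComponent_user.graphql';

const {graphql, useFragment} = require('react-relay/hooks');

type Props = {|
  user: UserComponent_user$key,
|};

function UserComponent(props: Props) {
  // Annotating data is optional; useFragment already returns this type
  const data: UserComponent_user$data = useFragment(
    graphql`
      fragment UserComponent_user on User {
        name
        profile_picture(scale: 2) {
          uri
        }
      }
    `,
    props.user,
  );
  return <h1>{data.name}</h1>;
}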
If you need to render data from multiple fragments inside the same component, you can use useFragment
multiple times:
import type {UserComponent_user$key} from 'UserComponent_user.graphql';
import type {UserComponent_viewer$key} from 'UserComponent_viewer.graphql';
const React = require('React');
const {graphql, useFragment} = require('react-relay/hooks');
type Props = {|
user: UserComponent_user$key,
viewer: UserComponent_viewer$key,
|};
function UserComponent(props: Props) {
const userData = useFragment(
graphql`
fragment UserComponent_user on User {
name
profile_picture(scale: 2) {
uri
}
}
`,
props.user,
);
const viewerData = useFragment(
graphql`
fragment UserComponent_viewer on Viewer {
actor {
name
}
}
`,
props.viewer,
);
return (
<>
<h1>{userData.name}</h1>
<div>
<img src={userData.profile_picture?.uri} />
Acting as: {viewerData.actor?.name ?? 'Unknown'}
</div>
</>
);
}
module.exports = UserComponent;
Composing Fragments
In GraphQL, fragments are reusable units, which means they can include other fragments, and consequently a fragment can be included within other fragments or Queries:
fragment UserFragment on User {
name
age
profile_picture(scale: 2) {
uri
}
...AnotherUserFragment
}
fragment AnotherUserFragment on User {
username
...FooUserFragment
}
With Relay, you can compose fragment components in a similar way, using both component composition and fragment composition. Each React component is responsible for fetching the data dependencies of its direct children - just as it has to know about its children's props in order to render them correctly. This pattern means that developers are able to reason locally about components - what data they need, what components they render - but Relay is able to derive a global view of the data dependencies of an entire UI tree.
/**
* UsernameSection.react.js
*
* Child Fragment Component
*/
import type {UsernameSection_user$key} from 'UsernameSection_user.graphql';
const React = require('React');
const {graphql, useFragment} = require('react-relay/hooks');
type Props = {|
user: UsernameSection_user$key,
|};
function UsernameSection(props: Props) {
const data = useFragment(
graphql`
fragment UsernameSection_user on User {
username
}
`,
props.user,
);
return <div>{data.username ?? 'Unknown'}</div>;
}
module.exports = UsernameSection;
/**
* UserComponent.react.js
*
* Parent Fragment Component
*/
import type {UserComponent_user$key} from 'UserComponent_user.graphql';
const React = require('React');
const {graphql, useFragment} = require('react-relay/hooks');
const UsernameSection = require('./UsernameSection.react');
type Props = {|
user: UserComponent_user$key,
|};
function UserComponent(props: Props) {
const user = useFragment(
graphql`
fragment UserComponent_user on User {
name
age
profile_picture(scale: 2) {
uri
}
# Include child fragment:
...UsernameSection_user
}
`,
props.user,
);
return (
<>
<h1>{user.name}</h1>
<div>
<img src={user.profile_picture?.uri} />
{user.age}
{/* Render child component, passing the _fragment reference_: */}
<UsernameSection user={user}/>
</div>
</>
);
}
module.exports = UserComponent;
There are a few things to note here:
- `UserComponent` both renders `UsernameSection`, and includes the fragment declared by `UsernameSection` inside its own `graphql` fragment declaration.
- `UsernameSection` expects a fragment reference as the `user` prop. As we've mentioned before, a fragment reference is an object that Relay uses to read the data declared in the fragment definition; as you can see, the child `UsernameSection_user` fragment itself just declares fields on the `User` type, but we need to know which specific user to read those fields from; this is what the fragment reference corresponds to. In other words, a fragment reference is like a pointer to a specific instance of a type that we want to read data from.
- Note that in this case the `user` passed to `UsernameSection`, i.e. the fragment reference, doesn't actually contain any of the data declared by the child `UsernameSection` component; instead, `UsernameSection` will use the fragment reference to read the data it declared internally, using `useFragment`. This prevents the parent from implicitly creating dependencies on data declared by its children, and vice versa, which allows us to reason locally about our components and modify them without worrying about affecting other components. If this weren't the case, and the parent had access to the child's data, modifying the data declared by the child could break the parent. This is known as data masking.
- The fragment reference that the child (i.e. `UsernameSection`) expects is the result of reading a parent fragment that includes the child fragment. In our particular example, that means the result of reading a fragment that includes `...UsernameSection_user` will be the fragment reference that `UsernameSection` expects. In other words, the data obtained as a result of reading a fragment via `useFragment` also serves as the fragment reference for any child fragments included in that fragment.
Queries
A GraphQL query is a request that can be sent to a GraphQL server in combination with a set of Variables, in order to fetch some data. It consists of a selection of fields, and potentially includes other fragments:
query UserQuery($id: ID!) {
user(id: $id) {
id
name
...UserFragment
}
viewer {
actor {
name
}
}
}
fragment UserFragment on User {
username
}
Sample response:
{
"data": {
"user": {
"id": "4",
"name": "Mark Zuckerberg",
"username": "zuck"
},
"viewer": {
"actor": {
"name": "Your Name"
}
}
}
}
NOTE: Fragments in Relay allow declaring data dependencies for a component, but they can't be fetched by themselves; they need to be included by a query, either directly or transitively. This implies that all fragments must belong to a query when they are rendered, or in other words, they must be rooted under some query. Note that a single fragment can still be included by multiple queries, but when rendering a specific instance of a fragment component, it must have been included as part of a specific query request.
To fetch and render a query in Relay, you can use useLazyLoadQuery
Hook:
import type {AppQuery} from 'AppQuery.graphql';
const React = require('React');
const {graphql, useLazyLoadQuery} = require('react-relay/hooks');
function App() {
const data = useLazyLoadQuery<AppQuery>(
graphql`
query AppQuery($id: ID!) {
user(id: $id) {
name
}
}
`,
{id: '4'},
);
return (
<h1>{data.user?.name}</h1>
);
}
Let's see what's going on here:
- `useLazyLoadQuery` takes a `graphql` query and some variables for that query, and returns the data that was fetched for that query. The `variables` are an object containing the values for the variables referenced inside the GraphQL query.
- Similarly to fragments, the component is automatically subscribed to updates to the query data: if the data for this query is updated anywhere in the app, the component will automatically re-render with the latest updated data.
- `useLazyLoadQuery` additionally takes a Flow type parameter, which corresponds to the Flow type for the query, in this case `AppQuery`.
  - Remember that Relay automatically generates Flow types for any declared queries, which you can import and use with `useLazyLoadQuery`. These types are available in the generated files with the following name format: `<query_name>.graphql.js`.
  - Note that the `variables` will be checked by Flow to ensure that you are passing values that match what the GraphQL query expects.
  - Note that the `data` is already properly Flow-typed without requiring an explicit annotation, and is based on the types from the GraphQL schema. For example, the type of `data` above would be: `{| user: ?{| name: ?string |} |}`.
- By default, when the component renders, Relay will automatically fetch the data for this query from the server (if it isn't already cached), and return it as the result of the `useLazyLoadQuery` call. We'll go into more detail about how to show loading states in the Loading States with Suspense section, and how Relay uses cached data in the Reusing Cached Data for Render section.
- Note that if you re-render your component and pass different query variables than the ones originally used, it will cause the query to be fetched again with the new variables, and potentially re-render with different data.
- Finally, make sure you're providing a Relay environment at the root of your app before trying to render a query: Relay Environment Provider.
To fetch and render a query that includes a fragment, you can compose them in the same way fragments are composed, as shown in the Composing Fragments section:
/**
* UserComponent.react.js
*
* Fragment Component
*/
import type {UserComponent_user$key} from 'UserComponent_user.graphql';
const React = require('React');
const {graphql, useFragment} = require('react-relay/hooks');
type Props = {|
user: UserComponent_user$key,
|};
function UserComponent(props: Props) {
const data = useFragment(
graphql`...`,
props.user,
);
return (...);
}
module.exports = UserComponent;
/**
* App.react.js
*
* Query Component
*/
import type {AppQuery} from 'AppQuery.graphql';
const React = require('React');
const {graphql, useLazyLoadQuery} = require('react-relay/hooks');
const UserComponent = require('./UserComponent.react');
function App() {
const data = useLazyLoadQuery<AppQuery>(
graphql`
query AppQuery($id: ID!) {
user(id: $id) {
name
# Include child fragment:
...UserComponent_user
}
}
`,
{id: '4'},
);
return (
<>
<h1>{data.user?.name}</h1>
{/* Render child component, passing the fragment reference: */}
<UserComponent user={data.user} />
</>
);
}
Note that:
- The fragment reference that `UserComponent` expects is the result of reading a parent query that includes its fragment, which in our case means a query that includes `...UserComponent_user`. In other words, the `data` obtained as a result of `useLazyLoadQuery` also serves as the fragment reference for any child fragments included in that query.
- As mentioned previously, all fragments must belong to a query when they are rendered, which means that all fragment components must be descendants of a query. This guarantees that you will always be able to provide a fragment reference for `useFragment`, by starting from the result of reading a root query with `useLazyLoadQuery`.
Variables
You may have noticed that the query declarations in our examples above contain references to an $id
symbol inside the GraphQL code: these are GraphQL Variables.
GraphQL variables are a construct that allows referencing dynamic values inside a GraphQL query. When fetching a query from the server, we also need to provide as input the actual set of values to use for the variables declared inside the query:
# `$id` is a variable of type `ID!`
query UserQuery($id: ID!) {
# The value of `$id` is used as input to the user() call:
user(id: $id) {
id
name
}
}
When sending a network request to fetch the query above, we need to provide both the query, and the variables to be used for this particular execution of the query. For example:
# Query:
query UserQuery($id: ID!) {
# ...
}
# Variables:
{"id": 4}
Fetching the above query and variables from the server would produce the following response:
{
"data": {
"user": {
"id": "4",
"name": "User 4"
}
}
}
- Note that changing the value of the `id` variable used as input would of course produce a different response.
Fragments can also reference variables that have been declared by a query:
fragment UserFragment on User {
name
profile_picture(scale: $scale) {
uri
}
}
query ViewerQuery($scale: Float!) {
viewer {
actor {
...UserFragment
}
}
}
- Even though the fragment above doesn't declare the `$scale` variable directly, it can still reference it. Doing so means that any query that includes this fragment, either directly or transitively, must declare the variable and its type, otherwise an error will be produced by the Relay compiler.
- In other words, query variables are available globally to any fragment that is a descendant of the query.
In Relay, fragment declarations inside components can also reference query variables:
function UserComponent(props: Props) {
const data = useFragment(
graphql`
fragment UserComponent_user on User {
name
profile_picture(scale: $scale) {
uri
}
}
`,
props.user,
);
return (...);
}
- The above fragment could be included by multiple queries and rendered by different components, which means that any query that ends up rendering/including the above fragment must declare the `$scale` variable, as shown in the example after this list.
- If any query that happens to include this fragment doesn't declare the `$scale` variable, an error will be produced by the Relay Compiler at build time, ensuring that an incorrect query never gets sent to the server (sending a query with missing variable declarations will also produce an error on the server).
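For example, a hypothetical query that includes the fragment above would need to declare the variable itself (the query name and fields here are illustrative):
query ProfilePageQuery($id: ID!, $scale: Float!) {
  user(id: $id) {
    ...UserComponent_user
  }
}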
@arguments and @argumentDefinitions
However, in order to prevent bloating queries with global variable declarations, Relay also provides a way to declare variables that are scoped locally to a fragment, using the `@arguments` and `@argumentDefinitions` directives:
/**
* Declare a fragment that accepts arguments with @argumentDefinitions
*/
function PictureComponent(props) {
const data = useFragment(
graphql`
fragment PictureComponent_user on User
@argumentDefinitions(scale: {type: "Float!"}) {
# `$scale` is a local variable here, declared above
# as an argument `scale`, of type `Float!`
profile_picture(scale: $scale) {
uri
}
}
`,
props.user,
);
}
/**
* Include fragment using @arguments
*/
function UserComponent(props) {
const data = useFragment(
graphql`
fragment UserComponent_user on User {
name
# Pass value of 2.0 for the `scale` variable
...PictureComponent_user @arguments(scale: 2.0)
}
`,
props.user,
);
}
/**
* Include same fragment using _different_ @arguments
*/
function OtherUserComponent(props) {
const data = useFragment(
graphql`
fragment OtherUserComponent_user on User {
name
# Pass a different value for the scale variable.
# The value can be another local or global variable:
...PictureComponent_user @arguments(scale: $pictureScale)
}
`,
props.user,
);
}
- Note that when passing `@arguments` to a fragment, we can pass a literal value or pass another variable. The variable can be a global query variable, or another local variable declared via `@argumentDefinitions`.
- When we actually fetch `PictureComponent_user` as part of a query, the `scale` value passed to the `profile_picture` field will depend on the argument that was provided by the parent of `PictureComponent_user`:
  - For `UserComponent_user`, the value of `$scale` will be 2.0.
  - For `OtherUserComponent_user`, the value of `$scale` will be whatever value we pass to the server for the `$pictureScale` variable when we fetch the query (see the example after this list).
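For completeness, a hypothetical query that fetches OtherUserComponent_user would then declare $pictureScale as a regular query variable:
query OtherUserQuery($id: ID!, $pictureScale: Float!) {
  user(id: $id) {
    ...OtherUserComponent_user
  }
}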
Fragments that expect arguments can also declare default values, making the arguments optional:
/**
* Declare a fragment that accepts arguments with default values
*/
function PictureComponent(props) {
const data = useFragment(
graphql`
fragment PictureComponent_user on User
@argumentDefinitions(scale: {type: "Float!", defaultValue: 2.0}) {
# `$scale` is a local variable here, declared above
# as an argument `scale`, of type `Float!` with a default value of `2.0`
profile_picture(scale: $scale) {
uri
}
}
`,
props.user,
);
}
function UserComponent(props) {
const data = useFragment(
graphql`
fragment UserComponent_user on User {
name
# Do not pass an argument, value for scale will be `2.0`
...PictureComponent_user
}
`,
props.user,
);
}
- Not passing the argument to `PictureComponent_user` makes it use the default value for its locally declared `$scale` variable, in this case 2.0.
Accessing GraphQL Variables At Runtime
If you want to access the variables that were set at the query root, the recommended approach is to pass the variables down the component tree in your application, using props, or your own application-specific context.
Relay currently does not expose the resolved variables (i.e. after applying argument definitions) for a specific fragment, and you should very rarely need to do so.
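As a minimal sketch of the context-based approach, assuming a hypothetical QueryVariablesContext defined by your application (this is plain React context, not a Relay API):
const React = require('React');

// Hypothetical application-specific context holding the root query variables
const QueryVariablesContext = React.createContext({});

function App() {
  const variables = {id: '4', scale: 2.0};
  // ... fetch the query here with useLazyLoadQuery(query, variables) ...
  return (
    <QueryVariablesContext.Provider value={variables}>
      <MainContent />
    </QueryVariablesContext.Provider>
  );
}

function DeeplyNestedComponent() {
  // Read the root variables without threading them through every prop
  const {scale} = React.useContext(QueryVariablesContext);
  return (...);
}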
Loading States with Suspense
As you may have noticed, we mentioned that using useLazyLoadQuery
will fetch a query from the server, but we didn't elaborate on how to render a loading UI while the query is being loaded. We will cover that in this section.
To render loading states while a query is being fetched, we rely on React Suspense. Suspense is a new feature in React that allows components to interrupt or "suspend" rendering in order to wait for some asynchronous resource (such as code, images or data) to be loaded; when a component "suspends", it indicates to React that the component isn't "ready" to be rendered yet, and won't be until the asynchronous resource it's waiting for is loaded. When the resource finally loads, React will try to render the component again.
This capability is useful for components to express asynchronous dependencies like data, code, or images that they require in order to render, and lets React coordinate rendering the loading states across a component tree as these asynchronous resources become available. More generally, the use of Suspense gives us better control to implement more deliberately designed loading states when our app is loading for the first time or when it's transitioning to different states, and helps prevent accidental flickering of loading elements (such as spinners), which can commonly occur when loading sequences aren't explicitly designed and coordinated.
For a lot more details on Suspense, check the React docs on Suspense.
Loading fallbacks with Suspense Boundaries
When a component is suspended, we need to render a fallback in place of the component while we await for it to become "ready". In order to do so, we use the Suspense
component provided by React:
const React = require('React');
const {Suspense} = require('React');
function App() {
return (
// Render a fallback using Suspense as a wrapper
<Suspense fallback={<LoadingSpinner />}>
<CanSuspend />
</Suspense>
);
}
Suspense
components can be used to wrap any component; if the target component suspends, Suspense
will render the provided fallback until all its descendants become "ready" (i.e. until all of the promises thrown inside its subtree of descendants resolve). Usually, the fallback is used to render a loading state, such as a glimmer.
Usually, different pieces of content in our app might suspend, so we can show loading state until they are resolved by using Suspense
:
/**
* App.react.js
*/
const React = require('React');
const {Suspense} = require('React');
const LoadingSpinner = require('./LoadingSpinner.react');
const MainContent = require('./MainContent.react');
function App() {
return (
// LoadingSpinner is rendered via the Suspense fallback
<Suspense fallback={<LoadingSpinner />}>
<MainContent /> {/* MainContent may suspend */}
</Suspense>
);
}
Let's distill what's going on here:
- If `MainContent` suspends because it's waiting on some asynchronous resource (like data), the `Suspense` component that wraps `MainContent` will detect that it suspended, and will render the `fallback` element (i.e. the `LoadingSpinner` in this case) up until `MainContent` is ready to be rendered. Note that this also transitively includes descendants of `MainContent`, which might also suspend.
What's nice about Suspense is that you have granular control about how to accumulate loading states for different parts of your component tree:
/**
* App.react.js
*/
const React = require('React');
const {Suspense} = require('React');
const LoadingSpinner = require('./LoadingSpinner.react');
const MainContent = require('./MainContent.react');
const SecondaryContent = require('./SecondaryContent.react');
function App() {
return (
// A LoadingSpinner for all of the content is rendered via the Suspense fallback
<Suspense fallback={<LoadingSpinner />}>
<MainContent />
<SecondaryContent /> {/* SecondaryContent can also suspend */}
</Suspense>
);
}
- In this case, both `MainContent` and `SecondaryContent` may suspend while they load their asynchronous resources; by wrapping both in a `Suspense`, we can show a single loading state up until they are all ready, and then render the entire content in a single paint, after everything has successfully loaded.
- In fact, `MainContent` and `SecondaryContent` may suspend for reasons other than fetching data, but the same `Suspense` component can be used to render a fallback up until all components in the subtree are ready to be rendered. Note that this also transitively includes descendants of `MainContent` or `SecondaryContent`, which might also suspend.
Conversely, you can also decide to be more granular about your loading UI and wrap Suspense components around smaller or individual parts of your component tree:
/**
* App.react.js
*/
const React = require('React');
const {Suspense} = require('React');
const LoadingSpinner = require('./LoadingSpinner.react');
const LeftColumn = require('./LeftHandColumn.react');
const LeftColumnPlaceholder = require('./LeftHandColumnPlaceholder.react');
const MainContent = require('./MainContent.react');
const SecondaryContent = require('./SecondaryContent.react');
function App() {
return (
<>
{/* Show a separate loading UI for the LeftHandColumn */}
<Suspense fallback={<LeftColumnPlaceholder />}>
<LeftColumn />
</Suspense>
{/* Show a separate loading UI for both the Main and Secondary content */}
<Suspense fallback={<LoadingSpinner />}>
<MainContent />
<SecondaryContent />
</Suspense>
</>
);
}
- In this case, we're showing 2 separate loading UIs:
  - One to be shown until the `LeftColumn` becomes ready.
  - And one to be shown until both the `MainContent` and `SecondaryContent` become ready.
- What is powerful about this is that by more granularly wrapping our components in Suspense, we allow other components to be rendered earlier as they become ready. In our example, by separately wrapping `MainContent` and `SecondaryContent` under `Suspense`, we're allowing `LeftColumn` to render as soon as it becomes ready, which might be earlier than when the content sections become ready.
Transitions and Updates that Suspend
Suspense
boundary fallbacks allow us to describe our loading states when initially rendering some content, but our applications will also have transitions between different content. Specifically, when switching between two components within an already mounted boundary, the new component you're switching to might not have loaded all of its async dependencies, which means that it will also suspend.
Whenever we're going to make a transition that might cause new content to suspend, we should use the `useTransition` Hook to schedule the update for the transition:
const {useTransition} = require('React');
function TabSwitcher() {
// We use startTransition to schedule the update
const [startTransition] = useTransition();
const [selectedTab, setSelectedTab] = useState('Home');
return (
<div>
<Suspense fallback={<LoadingGlimmer />}>
<MainContent tab={selectedTab} />
</Suspense>
<Button
onClick={() =>
startTransition(() => {
// Schedule an update that might suspend
setSelectedTab('Photos');
})
}>
Show Photos
</Button>
</div>
);
}
Let's take a look at what's happening here:
- We have a `MainContent` component that takes a tab to render. This component might suspend while it loads the content for the current tab. During initial render, if this component suspends, we'll show the `LoadingGlimmer` fallback from the `Suspense` boundary that is wrapping it.
- Additionally, in order to change tabs, we're keeping some state for the currently selected tab; when we set state to change the current tab, this will be an update that can cause the `MainContent` component to suspend again, since it may have to load the content for the new tab. Since this update may cause the component to suspend, we need to make sure to schedule it using the `startTransition` function we get from `useTransition`. By doing so, we're letting React know that the update may suspend, so React can coordinate and render it at the right priority.
However, when we make these sorts of transitions, we ideally want to avoid "bad loading states", that is, loading states (e.g. a glimmer) that would replace content that has already been rendered on the screen. In this case for example, if we're already showing content for a tab, instead of immediately replacing the content with a glimmer, we might instead want to render some sort of "pending" or "busy" state to let the user know that we're changing tabs, and then render the new selected tab when it's hopefully mostly ready. In order to do so, this is where we need to take into account the different stages of a transition (pending → loading → complete), and make use of additional Suspense primitives, that allow us to control what we want to show at each stage.
The pending stage is the first state in a transition, and is usually rendered close to the element that initiated the action (e.g. a "busy spinner" next to a button); it should occur immediately (at a high priority), and be rendered quickly in order to give feedback to the user that their action has been registered. The loading state occurs when we actually start showing the new content or the next screen; this update is usually heavier and can take a little longer, so it doesn't need to be executed at the highest priority. During the loading state is where we'll show the fallbacks from our `Suspense` boundaries (i.e. placeholders for the new content, like glimmers); some of the content might be partially rendered during this stage as async resources are loaded, so it can occur in multiple steps, until we finally reach the complete state, where the full content is rendered.
By default, when a suspense transition occurs, if the new content suspends, React will automatically transition to the loading state and show the fallbacks from any Suspense
boundaries that are in place for the new content. However, if we want to delay showing the loading state, and show a pending state instead, we can also use useTransition
to do so:
const {useTransition} = require('React');
const SUSPENSE_CONFIG = {
// timeoutMs allows us to delay showing the "loading" state for a while
// in favor of showing a "pending" state that we control locally
timeoutMs: 10 * 1000, // 10 seconds
};
function TabSwitcher() {
// isPending captures the "pending" state. It will become true
// **immediately** when the transition starts, and will be set back to false
// when the transition reaches the fully "completed" stage (i.e. when all the
// new content has fully loaded)
const [startTransition, isPending] = useTransition(SUSPENSE_CONFIG);
const [selectedTab, setSelectedTab] = useState('Home');
return (
<div>
<Suspense fallback={<LoadingGlimmer />}>
<MainContent tab={selectedTab} />
</Suspense>
<Button
onClick={() =>
startTransition(() => {
// Schedule an update that might suspend
setSelectedTab('Photos');
})
}
disabled={isPending}>
Show Photos
</Button>
</div>
);
}
NOTE: Providing a Suspense config to
useTransition
will only work as expected in React Concurrent Mode
Let's take a look at what's happening here:
- In this case, we're passing the `SUSPENSE_CONFIG` config object to `useTransition` in order to configure how we want this transition to behave. Specifically, we can pass a `timeoutMs` property in the config, which will dictate how long React should wait before transitioning to the "loading" state (i.e. transitioning to showing the fallbacks from the `Suspense` boundaries), in favor of showing a pending state controlled locally by the component during that time.
- `useTransition` will also return an `isPending` boolean value, which captures the pending state. That is, this value will become `true` immediately when the transition starts, and will become `false` when the transition reaches the fully "completed" stage, that is, when all the new content has been fully loaded. As mentioned above, the pending state should be used to give immediate feedback to the user that their action has been received, and we can do so by using the `isPending` value to control what we render; for example, we can use that value to render a spinner next to the button, or in this case, disable the button immediately after it is clicked.
For more details, check out the React docs on Suspense.
How We Use Suspense in Relay
Queries
In our case, our query renderer components are components that can suspend, so we use Suspense to render loading states while a query is being fetched. Let's see what that looks like in practice:
Say we have the following query renderer component:
/**
* MainContent.react.js
*
* Query Component
*/
const React = require('React');
const {graphql, useLazyLoadQuery} = require('react-relay/hooks');
function MainContent() {
// Fetch and render a query
const data = useLazyLoadQuery<...>(
graphql`...`,
{...}, // variables
);
return (...);
}
/**
* App.react.js
*/
const React = require('React');
const {Suspense} = require('React');
const LoadingSpinner = require('./LoadingSpinner.react');
const MainContent = require('./MainContent.react');
function App() {
return (
// LoadingSpinner is rendered via the Suspense fallback
<Suspense fallback={<LoadingSpinner />}>
<MainContent /> {/* MainContent may suspend */}
</Suspense>
);
}
Let's distill what's going on here:
- We have a `MainContent` component, which is a query renderer that fetches and renders a query. `MainContent` will suspend rendering when it attempts to fetch the query, indicating that it isn't ready to be rendered yet, and it will resolve when the query is fetched.
- The `Suspense` component that wraps `MainContent` will detect that `MainContent` suspended, and will render the `fallback` element (i.e. the `LoadingSpinner` in this case) up until `MainContent` is ready to be rendered; that is, up until the query is fetched.
Fragments
Fragments are also integrated with Suspense in order to support rendering of data that's partially available in the Relay Store. For more details, check out the Rendering Partially Cached Data section.
Transitions
Additionally, our APIs for refetching (Re-rendering with Different Data) and for Rendering Connections are also integrated with Suspense; for these use cases, we are initiating Suspense transitions after initial content has been rendered, such as by refetching or paginating, which means that these transitions should also use useTransition
. Check out those sections for more details.
Error States with Error Boundaries
As you may have noticed, we mentioned that using useLazyLoadQuery
will fetch a query from the server, but we didn't elaborate on how to render UI to show an error if an error occurred during fetch. We will cover that in this section.
We can use Error Boundary components to catch errors that occur during render (due to a network error, or any kind of error), and render an alternative error UI when that occurs. The way it works is similar to how `Suspense` works: by wrapping a component tree in an error boundary, we can specify how we want to react when an error occurs, for example by rendering a fallback UI.
Error boundaries are simply components that implement the static getDerivedStateFromError
method:
const React = require('React');
type State = {|error: ?Error|};
class ErrorBoundary extends React.Component<Props, State> {
state = {error: null};
static getDerivedStateFromError(error): State {
// Set some state derived from the caught error
return {error: error};
}
render() {
// If an error was caught, render the fallback UI; otherwise render children
const {error} = this.state;
return error != null ? this.props.fallback(error) : this.props.children;
}
}
Which we can use like so:
/**
* App.react.js
*/
const ErrorBoundary = require('ErrorBoundary');
const React = require('React');
const MainContent = require('./MainContent.react');
const SecondaryContent = require('./SecondaryContent.react');
function App() {
return (
// Render an ErrorUI if an error occurs within
// MainContent or SecondaryContent
<ErrorBoundary fallback={error => <ErrorUI error={error} />}>
<MainContent />
<SecondaryContent />
</ErrorBoundary>
);
}
- We can use the Error Boundary to wrap subtrees and show a different UI when an error occurs within that subtree. When an error occurs, the specified `fallback` will be rendered instead of the content inside the boundary.
- Note that we can also control the granularity at which we render error UIs, by wrapping components at different levels with error boundaries. In this example, if any error occurs within `MainContent` or `SecondaryContent`, we will render an `ErrorUI` in place of the entire app content.
Retrying after an Error
In order to retry fetching a query after an error has occurred, we can attempt to re-mount the query component that produced an error:
/**
* ErrorBoundaryWithRetry.react.js
*/
const React = require('React');
type State = {|error: ?Error|};
// Sample ErrorBoundary that supports retrying to render the content
// that errored
class ErrorBoundaryWithRetry extends React.Component<Props, State> {
state = {error: null};
static getDerivedStateFromError(error): State {
return {error: error};
}
_retry = () => {
this.setState({error: null});
}
render() {
const {children, fallback} = this.props;
const {error} = this.state;
if (error) {
if (typeof fallback === 'function') {
return fallback(error, this._retry);
}
return fallback;
}
return children;
}
}
/**
* App.react.js
*/
const ErrorBoundaryWithRetry = require('./ErrorBoundaryWithRetry.react');
const React = require('React');
const MainContent = require('./MainContent.react');
function App() {
return (
<ErrorBoundaryWithRetry
fallback={(error, retry) =>
<>
<ErrorUI error={error} />
{/* Render a button to retry; this will attempt to re-render the
content inside the boundary, i.e. the query component */}
<Button onPress={retry}>Retry</Button>
</>
}>
<MainContent />
</ErrorBoundaryWithRetry>
);
}
- The sample Error Boundary in this example code will provide a `retry` function to re-attempt to render the content that originally produced the error. By doing so, we will attempt to re-render our query component (which uses `useLazyLoadQuery`), and consequently attempt to fetch the query again.
Accessing errors in GraphQL Response
By default, Relay will only surface to React the errors that are returned in the top-level errors field if:
- the fetch function provided to the Relay Network throws or returns an Error, or
- the top-level `data` field wasn't returned in the response.
If you wish to access error information in your application to display user-friendly messages, the recommended approach is to model and expose the error information as part of your GraphQL schema.
For example, you could expose a field in your schema that returns either the expected result, or an Error object if an error occurred while resolving that field (instead of returning null):
type Error {
# User friendly message
message: String!
}
type Foo {
bar: Result | Error
}
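On the client, a hypothetical query against such a schema could then select the error case explicitly (the root field and exact selection syntax depend on how the union is modeled in your schema):
query FooBarQuery {
  foo {
    bar {
      __typename
      ... on Error {
        message
      }
      # ... fields for the successful Result case
    }
  }
}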
Environment
Relay Environment Provider
In order to render Relay components, you need to render a RelayEnvironmentProvider
component at the root of the app:
// App root
const {RelayEnvironmentProvider} = require('react-relay/hooks');
function Root() {
return (
<RelayEnvironmentProvider environment={environment}>
{...}
</RelayEnvironmentProvider>
);
}
- The `RelayEnvironmentProvider` takes an environment, which it will make available to all descendant Relay components, and which is necessary for Relay to function.
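The environment itself is typically created once using relay-runtime. Here's a minimal sketch, assuming a fetchGraphQL helper that you implement to send requests to your GraphQL server:
const {Environment, Network, RecordSource, Store} = require('relay-runtime');

// fetchGraphQL is your own function that posts the query text and variables
// to your GraphQL server and returns the parsed JSON response.
async function fetchRelay(params, variables) {
  return fetchGraphQL(params.text, variables);
}

const environment = new Environment({
  network: Network.create(fetchRelay),
  store: new Store(new RecordSource()),
});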
Accessing the Relay Environment
If you want to access the current Relay Environment within a descendant of a RelayEnvironmentProvider
component, you can use the useRelayEnvironment
Hook:
const {useRelayEnvironment} = require('react-relay/hooks');
function UserComponent(props: Props) {
const environment = useRelayEnvironment();
return (...);
}
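For example, here's a sketch of using the environment from the Hook to perform an imperative local update with commitLocalUpdate from relay-runtime (the itemID prop and is_seen field are hypothetical):
const {useRelayEnvironment} = require('react-relay/hooks');
const {commitLocalUpdate} = require('relay-runtime');

function MarkAsSeenButton(props) {
  const environment = useRelayEnvironment();
  const markAsSeen = () => {
    commitLocalUpdate(environment, store => {
      // `is_seen` is a hypothetical client-side field on this record
      const record = store.get(props.itemID);
      if (record != null) {
        record.setValue(true, 'is_seen');
      }
    });
  };
  return <Button onPress={markAsSeen}>Mark as seen</Button>;
}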
Reusing Cached Data for Render
While our app is in use, Relay will accumulate and cache (for some time) the data for the multiple queries that have been fetched throughout usage of our app. Often times, we'll want to be able to reuse and immediately render this data that is locally cached instead of waiting for a network request when fulfilling a query; this is what we'll cover in this section.
Some examples of when this might be useful are:
- Navigating between tabs in an app, where each tab renders a query. If a tab has already been visited, re-visiting the tab should render it instantly, without having to wait for a network request to fetch the data that we've already fetched before.
- Navigating to a post that was previously rendered on a feed. If the post has already been rendered on a feed, navigating to the post's permalink page should render the post immediately, since all of the data for the post should already be cached.
- Even if rendering the post in the permalink page requires more data than rendering the post on a feed, we'd still like to reuse and immediately render as much of the post's data as we already have available locally, without blocking render for the entire post if only a small bit of data is missing.
Fetch Policies
The first step to reusing locally cached data is to specify a fetchPolicy
for useLazyLoadQuery
:
const React = require('React');
const {graphql, useLazyLoadQuery} = require('react-relay/hooks');
function App() {
const data = useLazyLoadQuery<AppQuery>(
graphql`
query AppQuery($id: ID!) {
user(id: $id) {
name
}
}
`,
{id: '4'},
{fetchPolicy: 'store-or-network'},
);
return (
<h1>{data.user?.name}</h1>
);
}
The provided fetchPolicy
will determine:
- if the query should be fulfilled from the local cache, and
- if a network request should be made to fetch the query from the server, depending on the availability of the data for that query in the store.
By default, Relay will try to read the query from the local cache; if any piece of data for that query is missing or stale, it will fetch the entire query from the network. This default fetchPolicy
is called "store-or-network".
Specifically, fetchPolicy
can be any of the following options:
- "store-or-network": (default) will reuse locally cached data and will only send a network request if any data for the query is missing or stale. If the query is fully cached, a network request will not be made.
- "store-and-network": will reuse locally cached data and will always send a network request, regardless of whether any data was missing or stale in the store.
- "network-only": will not reuse locally cached data, and will always send a network request to fetch the query, ignoring any data that might be locally cached and whether it's missing or stale.
- "store-only": will only reuse locally cached data, and will never send a network request to fetch the query. In this case, the responsibility of fetching the query falls to the caller, but this policy could also be used to read and operate on data that is entirely local.
Note that the refetch
function discussed in the Fetching More Data and Rendering Different Data section also takes a fetchPolicy
.
Availability of Cached Data
The behavior of the fetch policies described in the previous section will depend on the availability of the data in the Relay store at the moment we attempt to evaluate a query.
There are 2 main aspects that determine the availability of data, which we will go over in this section:
Presence of Data
An important thing to keep in mind when attempting to reuse data that is cached in the Relay store is to understand the lifetime of that data; that is, if it is present in the store, and for how long it will be.
Data in the Relay store for a given query will generally be present after the query has been fetched for the first time, as long as that query is being rendered on the screen. If we’ve never fetched data for a specific query, then it will be missing from the store.
However, even after we've fetched data for different queries, we can't keep all of the data that we've fetched indefinitely in memory, since over time it would grow to be too large and too stale. In order to mitigate this, Relay runs a process called Garbage Collection, in order to delete data that we're no longer using:
Garbage Collection in Relay
Specifically, Relay runs garbage collection on the local in-memory store by deleting any data that is no longer being referenced by any component in the app.
However, this can be at odds with reusing cached data; if the data is deleted too soon, before we try to reuse it again later, that will prevent us from reusing that data to render a screen without having to wait on a network request. To address this, this section will cover what you need to do in order to ensure that the data you want to reuse is kept cached for as long as you need it.
Query Retention
Retaining a query indicates to Relay that the data for that query and variables shouldn't be deleted (i.e. garbage collected). Multiple callers might retain a single query, and as long as there is at least one caller retaining a query, it won't be deleted from the store.
By default, any query components using useLazyLoadQuery or our other APIs will retain the query for as long as they are mounted. After they unmount, they will release the query, which means that the query might be deleted at any point in the future after that occurs.
If you need to retain a specific query outside of the components lifecycle, you can use the retain
operation:
// Retain query; this will prevent the data for this query and
// variables from being garbage collected by Relay
const disposable = environment.retain(queryDescriptor);
// Disposing of the disposable will release the data for this query
// and variables, meaning that it can be deleted at any moment
// by Relay's garbage collection if it hasn't been retained elsewhere
disposable.dispose();
- As mentioned, this will allow you to retain the query even after a query component has unmounted, allowing other components, or future instances of the same component, to reuse the retained data.
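As a sketch of where the queryDescriptor above can come from, you can build one with createOperationDescriptor from relay-runtime (the query and variables here are illustrative):
const {createOperationDescriptor, getRequest, graphql} = require('relay-runtime');

const query = graphql`
  query AppUserQuery($id: ID!) {
    user(id: $id) {
      name
    }
  }
`;

// Build an operation descriptor for this query + variables pair and retain it:
const queryDescriptor = createOperationDescriptor(getRequest(query), {id: '4'});
const disposable = environment.retain(queryDescriptor);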
Controlling Relay's Garbage Collection Policy
There are currently 2 options you can provide to your Relay Store in order to control the behavior of garbage collection:
GC Scheduler
The gcScheduler
is a function you can provide to the Relay Store which will determine when a GC execution should be scheduled to run:
// Sample scheduler function
// Accepts a callback and schedules it to run at some future time.
function gcScheduler(run: () => void) {
resolveImmediate(run);
}
const store = new Store(source, {gcScheduler});
- By default, if a `gcScheduler` option is not provided, Relay will schedule garbage collection using the `resolveImmediate` function.
- You can provide a scheduler function to make GC scheduling less aggressive than the default, for example based on time or scheduler priorities, or any other heuristic (see the sketch below). By convention, implementations should not execute the callback immediately.
GC Release Buffer Size
The Relay Store internally holds a release buffer to keep a specific (configurable) number of queries temporarily retained even after they have been released by their original owner (i.e., usually when a component rendering that query unmounts). This makes it possible (and more likely) to reuse data when navigating back to a page, tab or piece of content that has been visited before.
In order to configure the size of the release buffer, you can provide the gcReleaseBufferSize option to the Relay Store:
const store = new Store(source, {gcReleaseBufferSize: 10});
- Note that having a buffer size of 0 is equivalent to not having the release buffer, which means that queries will be immediately released and collected.
Staleness of Data
Assuming our data is present in the store, we still need to consider the staleness of such data.
By default, Relay will never consider data in the store to be stale (regardless of how long it has been cached for), unless it's explicitly marked as stale using our data invalidation APIs.
Marking data as stale is useful for cases when we explicitly know that some data is no longer fresh (for example after executing a Mutation), and we want to make sure it gets refetched with the latest value from the server. Specifically, when data has been marked as stale, if any query references the stale data, that means the query will also be considered stale, and it will need to be fetched again the next time it is evaluated, given the provided Fetch Policy.
Relay exposes the following APIs to mark data as stale within an update to the store:
Globally Invalidating the Relay Store
The coarsest type of data invalidation we can perform is invalidating the whole store, meaning that all currently cached data will be considered stale after invalidation.
To invalidate the store, we can call invalidateStore()
within an updater function:
function updater(store) {
store.invalidateStore();
}
- Calling `invalidateStore()` will cause all data that was written to the store before invalidation occurred to be considered stale, and will require any query to be refetched again the next time it's evaluated.
- Note that an updater function can be specified as part of a mutation, subscription or just a local store update (see the sketch below).
Invalidating Specific Data in the Store
We can also be more granular about which data we invalidate and only invalidate specific records in the store; compared to global invalidation, only queries that reference the invalidated records will be considered stale after invalidation.
To invalidate a record, we can call invalidateRecord()
within an updater function:
function updater(store) {
const user = store.get('<id>');
if (user != null) {
user.invalidateRecord();
}
}
- Calling `invalidateRecord()` on the user record will mark that specific user in the store as stale. That means that any query that is cached and references that invalidated user will now be considered stale, and will need to be refetched again the next time it's evaluated.
- Note that an updater function can be specified as part of a mutation, subscription or just a local store update.
Subscribing to Data Invalidation
Just marking the store or records as stale will cause queries to be refetched the next time they are evaluated; so for example, the next time you navigate back to a page that renders a stale query, the query will be refetched even if the data is cached, since the query references stale data.
This is useful for a lot of use cases, but there are some times when we’d like to immediately refetch some data upon invalidation, for example:
- When invalidating data that is already visible in the current page. Since no navigation is occurring, we won't re-evaluate the queries for the current page, so even if some data is stale, it won't be immediately refetched and we will be showing stale data.
- When invalidating data that is rendered on a previous view that was never unmounted; since the view wasn't unmounted, if we navigate back, the queries for that view won't be re-evaluated, meaning that even if some data is stale, it won't be refetched and we will be showing stale data.
To support these use cases, Relay exposes the useSubscribeToInvalidationState
hook:
function ProfilePage(props) {
// Example of querying data for the current page for a given user
const data = usePreloadedQuery(
graphql`...`,
props.preloadedQuery,
)
// Here we subscribe to changes in invalidation state for the given user ID.
// Whenever the user with that ID is marked as stale, the provided callback will
// be executed
useSubscribeToInvalidationState([props.userID], () => {
// Here we can do things like:
// - re-evaluate the query by passing a new preloadedQuery to usePreloadedQuery.
// - imperatively refetch any data
// - render a loading spinner or gray out the page to indicate that refetch
// is happening.
})
return (...);
}
- `useSubscribeToInvalidationState` takes an array of ids and a callback. Whenever any of the records for those ids are marked as stale, the provided callback will fire.
- Inside the callback, we can react accordingly and refetch and/or update any current views that are rendering stale data. As an example, we could re-execute the top-level `usePreloadedQuery` by keeping the `preloadedQuery` in state and setting a new one here; since that query is stale at that point, the query will be refetched even if the data is cached in the store.
Rendering Partially Cached Data [HIGHLY EXPERIMENTAL]
NOTE: Partial rendering behavior is still highly experimental and likely to change, and is only enabled under an experimental option. If you still wish to use it, you can enable it by passing `{UNSTABLE_renderPolicy: "partial"}` as an option to `useLazyLoadQuery`.
Often times when dealing with cached data, we'd like the ability to perform partial rendering. We define "partial rendering" as the ability to immediately render a query that is partially cached. That is, parts of the query might be missing, but parts of the query might already be cached. In these cases, we want to be able to immediately render the parts of the query that are cached, without waiting on the full query to be fetched.
This can be useful in scenarios where we want to render a screen or a page as fast as possible, and we know that some of the data for that page is already cached, so we can skip a loading state. For example, imagine a user profile page: it is very likely that the user's name has already been cached at some point during usage of the app, so when visiting a profile page, if the name of the user is cached, we'd like to render immediately, even if the rest of the data for the profile page isn't available yet.
Fragments as boundaries for partial rendering
To do this, we rely on the ability of fragment containers to suspend. A fragment container will suspend if any of the data it declared locally is missing during render, and is currently being fetched. Specifically, it will suspend until the data it requires is fetched, that is, until the query it belongs to (its parent query) is fetched.
Let's explain what this means with an example. Say we have the following fragment component:
/**
* UsernameComponent.react.js
*
* Fragment Component
*/
import type {UsernameComponent_user$key} from 'UsernameComponent_user.graphql';
const React = require('React');
const {graphql, useFragment} = require('react-relay/hooks');
type Props = {|
user: UsernameComponent_user$key,
|};
function UsernameComponent(props: Props) {
const user = useFragment(
graphql`
fragment UsernameComponent_user on User {
username
}
`,
props.user,
);
return (...);
}
module.exports = UsernameComponent;
And we have the following query component, which queries for some data, and also includes the fragment above:
/**
* App.react.js
*
* Query Component
*/
const React = require('React');
const {graphql, useLazyLoadQuery} = require('react-relay/hooks');
const UsernameComponent = require('./UsernameComponent.react');
function App() {
const data = useLazyLoadQuery<AppQuery>(
graphql`
query AppQuery($id: ID!) {
user(id: $id) {
name
...UsernameComponent_user
}
}
`,
{id: '4'},
{fetchPolicy: 'store-or-network'},
);
return (
<>
<h1>{data.user?.name}</h1>
<UsernameComponent user={data.user} />
</>
);
}
Say that when this `App` component is rendered, we've already previously fetched (only) the `name` for the `User` with `{id: 4}`, and it is locally cached in the Relay Store.
If we attempt to render the query with a `fetchPolicy` that allows reusing locally cached data (`'store-or-network'` or `'store-and-network'`), the following will occur:
- The query will check if any of its locally required data is missing. In this case, it isn't. Specifically, the query is only directly querying for the `name`, and the name is available, so as far as the query is concerned, none of the data it requires to render itself is missing. This is important to keep in mind: when rendering a query, we eagerly read out data and render the tree, instead of blocking rendering of the entire tree until all of the data for the query (i.e. including nested fragments) is fetched. As we render, we will consider data to be missing for a component if the data it declared locally is missing, i.e. if any data required to render the current component is missing, and not if data for descendant components is missing.
- Given that the query doesn't have any data missing, it will render, and then attempt to render the child `UsernameComponent`.
- When the `UsernameComponent` attempts to render the `UsernameComponent_user` fragment, it will notice that some of the data required to render itself is missing; specifically, the `username` is missing. At this point, since `UsernameComponent` has missing data, it will suspend rendering until the network request completes. Note that regardless of which `fetchPolicy` you choose, a network request will always be started if any piece of data for the full query, i.e. including fragments, is missing.
At this point, when `UsernameComponent` suspends due to the missing `username`, ideally we should still be able to render the `User`'s `name` immediately, since it's locally cached. However, since we aren't using a `Suspense` component to catch the fragment's suspension, the suspension will bubble up and the entire `App` component will be suspended.
In order to achieve the desired effect of rendering the `name` when it's available even if the `username` is missing, we just need to wrap the `UsernameComponent` in `Suspense` to allow the other parts of `App` to continue rendering:
/**
* App.react.js
*
* Query Component
*/
const React = require('React');
const {Suspense} = require('React');
const {graphql, useLazyLoadQuery} = require('react-relay/hooks');
const UsernameComponent = require('./UsernameComponent.react');
function App() {
const data = useLazyLoadQuery<AppQuery>(
graphql`
query AppQuery($id: ID!) {
user(id: $id) {
name
...UsernameComponent_user
}
}
`,
{id: '4'},
{fetchPolicy: 'store-or-network'},
);
return (
<>
<h1>{data.user?.name}</h1>
{/*
Wrap the UserComponent in Suspense to allow other parts of the
App to be rendered even if the username is missing.
*/}
<Suspense fallback={<LoadingSpinner label="Fetching username" />}>
<UsernameComponent user={data.user} />
</Suspense>
</>
);
}
The process that we described above works the same way for nested fragments (i.e. fragments that include other fragments). This means that if the data required to render a fragment is locally cached, the fragment component will be able to render, regardless of whether data for any of its child or descendant fragments is missing. If data for a child fragment is missing, we can wrap it in a `Suspense` component to allow other fragments and parts of the app to continue rendering.
Filling in Missing Data (Missing Data Handlers)
In the previous section we covered how to reuse data that is fully or partially cached; however, there are cases in which Relay can't automatically tell that it can reuse some of its local data to fulfill a query. Specifically, Relay knows how to reuse data that is cached for the same query; that is, if you fetch the exact same query twice, Relay will know that it has the data cached for that query the second time.
However, when using different queries, there might still be cases where different queries point to the same data, which we'd want to be able to reuse. For example, imagine the following two queries:
// Query 1
query UserQuery {
user(id: 4) {
name
}
}
// Query 2
query NodeQuery {
node(id: 4) {
... on User {
name
}
}
}
These two queries are different, but reference the exact same data. Ideally, if one of the queries was already cached in the store, we should be able to reuse that data when rendering the other query. However, Relay doesn't have this knowledge by default, so we need to configure it to encode the knowledge that a `node(id: 4)` "is also a" `user(id: 4)`.
To do so, we can provide `missingFieldHandlers` to the `RelayEnvironment`, which specify this knowledge:
const {ROOT_TYPE, Environment} = require('react-relay');
const missingFieldHandlers = [
{
handle(field, record, argValues): ?string {
if (
record != null &&
record.__typename === ROOT_TYPE &&
field.name === 'user' &&
argValues.hasOwnProperty('id')
) {
// If field is user(id: $id), look up the record by the value of $id
return argValues.id;
}
if (
record != null &&
record.__typename === ROOT_TYPE &&
field.name === 'story' &&
argValues.hasOwnProperty('story_id')
) {
// If field is story(story_id: $story_id), look up the record by the
// value of $story_id.
return argValues.story_id;
}
return null;
},
kind: 'linked',
},
];
const environment = new Environment({
// ... other environment options, e.g. network and store
missingFieldHandlers,
});
- `missingFieldHandlers` is an array of handlers. Each handler must specify a `handle` function, and the kind of missing fields it knows how to handle. The 2 main types of fields that you'd want to handle are:
  - 'scalar': This represents a field that contains a scalar value, for example a number or a string.
  - 'linked': This represents a field that references another object, i.e. not a scalar.
- The `handle` function takes the field that is missing, the record that field belongs to, and any arguments that were passed to the field in the current execution of the query.
  - When handling a 'scalar' field, the handle function should return a scalar value, to be used as the value for the missing field (a sketch is shown after this list).
  - When handling a 'linked' field, the handle function should return an ID, referencing another object in the store that should be used in place of the missing field.
- As Relay attempts to fulfill a query from the local cache, whenever it detects any missing data, it will run any of the provided missing field handlers that match the field type before definitively declaring that the data is missing.
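As a complement to the 'linked' handlers above, here is a minimal sketch of what a 'scalar' handler could look like. The field names (`profile_url`, `url`) are hypothetical and only serve to illustrate the idea of reusing a cached scalar value for a missing field:

```javascript
// A 'scalar' missing field handler; combine it with the 'linked' handlers
// above when constructing the Environment.
const scalarMissingFieldHandler = {
  handle(field, record, argValues): mixed {
    if (
      record != null &&
      record.__typename === 'User' &&
      field.name === 'profile_url' &&
      record.url != null
    ) {
      // If profile_url is missing but url is already cached on the same
      // record, reuse that cached value (both field names are hypothetical).
      return record.url;
    }
    // Returning undefined means this handler can't fill in the field,
    // so Relay will still consider the data to be missing.
    return undefined;
  },
  kind: 'scalar',
};

module.exports = scalarMissingFieldHandler;
```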
Fetching and Rendering Different Data
After an app has been initially rendered, there are various scenarios in which you might want to fetch and render more data, re-render your UI with different data, or maybe refresh existing data, usually as a result of an event or user interaction.
In this section we'll cover some of the most common scenarios and how to build them with Relay.
Refreshing Rendered Data
Assuming you're not using real-time updates to update your data (e.g. using GraphQL Subscriptions), often times you'll want to refetch the same data you've already rendered, in order to get the latest version available on the server. This is what we'll cover in this section.
Refreshing Queries
To refresh a query, you can use the `fetchQuery` function described in our Fetching Queries section. Specifically, you can call `fetchQuery` inside the component with the exact same query and variables. Given that the query component is subscribed to any changes in its own data, when the request completes, the component will automatically update and re-render with the latest data:
import type {AppQuery} from 'AppQuery.graphql';
const React = require('React');
const {graphql, useLazyLoadQuery, useRelayEnvironment, fetchQuery} = require('react-relay/hooks');
function App() {
const environment = useRelayEnvironment();
const variables = {id: '4'};
const appQuery = graphql`
query AppQuery($id: ID!) {
user(id: $id) {
name
friends {
count
}
}
}
`;
const refresh = () => {
fetchQuery(
environment,
appQuery,
variables,
)
.toPromise();
};
const data = useLazyLoadQuery<AppQuery>(appQuery, variables);
return (
<>
<h1>{data.user?.name}</h1>
<div>Friends count: {data.user.friends?.count}</div>
<Button onClick={() => refresh()}>Fetch latest count</Button>
</>
);
}
If you want to know whether the request is in flight, in order to show a busy indicator or disable a UI control, you can subscribe to the observable returned by `fetchQuery`, and keep state in your component:
import type {AppQuery} from 'AppQuery.graphql';
const React = require('React');
const {useState} = require('React');
const {graphql, useLazyLoadQuery, useRelayEnvironment, fetchQuery} = require('react-relay/hooks');
function App() {
const environment = useRelayEnvironment();
const variables = {id: '4'};
const appQuery = graphql`
query AppQuery($id: ID!) {
user(id: $id) {
name
friends {
count
}
}
}
`;
const [isRefreshing, setIsRefreshing] = useState(false);
const refresh = () => {
fetchQuery(
environment,
appQuery,
variables,
)
.subscribe({
start: () => setIsRefreshing(true),
complete: () => setIsRefreshing(false),
});
};
const data = useLazyLoadQuery<AppQuery>(appQuery, variables);
return (
<>
<h1>{data.user?.name}</h1>
<div>Friends count: {data.user.friends?.count}</div>
<Button
disabled={isRefreshing}
onClick={() => refresh()}>
Fetch latest count {isRefreshing ? <LoadingSpinner /> : null}
</Button>
</>
);
}
Refreshing Fragments
In order to refresh the data for a fragment, we can also use `fetchQuery`, but we need to provide a query to refetch the fragment under; remember, fragments can't be fetched by themselves: they need to be part of a query, so we can't just "fetch" the fragment again by itself.
However, we don't need to manually write the query; instead, we can use the `@refetchable` directive, which will make it so Relay automatically generates a query to fetch the fragment when the compiler is run:
import type {UserComponent_user$key} from 'UserComponent_user.graphql';
const React = require('React');
const {graphql, useFragment, useRelayEnvironment, fetchQuery} = require('react-relay/hooks');
// This query is autogenerated by Relay given @refetchable used below
const UserComponentUserRefreshQuery = require('UserComponentUserRefreshQuery.graphql');
type Props = {|
user: UserComponent_user$key,
|};
function UserComponent(props: Props) {
const environment = useRelayEnvironment();
const data = useFragment(
graphql`
fragment UserComponent_user on User
# @refetchable makes it so Relay autogenerates a query for
# fetching this fragment
@refetchable(queryName: "UserComponentUserRefreshQuery") {
id
name
friends {
count
}
}
`,
props.user,
);
const refresh = () => {
fetchQuery(
environment,
UserComponentUserRefreshQuery,
{id: data.id},
)
.toPromise();
};
return (
<>
<h1>{data.name}</h1>
<div>Friends count: {data.friends?.count}</div>
<Button onClick={() => refresh()}>Fetch latest count</Button>
</>
);
}
module.exports = UserComponent;
- Relay will autogenerate a query when we add the `@refetchable` directive to our fragment, and we can import it and pass it to `fetchQuery`. Note that the `@refetchable` directive can only be added to fragments that are "refetchable", that is, to fragments that are on `Viewer`, on `Query`, or on a type that implements `Node` (i.e. a type that has an `id` field).
- In order to fetch the query, we need to know the `id` of the user, since it will be a required query variable in the generated query. To do so, we simply include the `id` in our fragment.
- Given that the fragment container component is subscribed to any changes in its own data, when the request completes, the component will automatically update and re-render with the latest data.
- If you want to know whether the request is in flight, in order to show a busy indicator or disable a UI control, you can provide an `observer` to `fetchQuery`, and keep state in your component, as shown in the sketch below.
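For example, here's a minimal sketch (building on the `UserComponent` example above, not part of the original) that uses an observer and local state to disable the button and show a spinner while the refresh request is in flight:

```javascript
import type {UserComponent_user$key} from 'UserComponent_user.graphql';

const React = require('React');
const {useState} = require('React');
const {graphql, useFragment, useRelayEnvironment, fetchQuery} = require('react-relay/hooks');

// This query is autogenerated by Relay given @refetchable used below
const UserComponentUserRefreshQuery = require('UserComponentUserRefreshQuery.graphql');

type Props = {|
  user: UserComponent_user$key,
|};

function UserComponent(props: Props) {
  const environment = useRelayEnvironment();
  const [isRefreshing, setIsRefreshing] = useState(false);
  const data = useFragment(
    graphql`
      fragment UserComponent_user on User
        @refetchable(queryName: "UserComponentUserRefreshQuery") {
        id
        name
        friends {
          count
        }
      }
    `,
    props.user,
  );
  const refresh = () => {
    // Track the in-flight state via the observer passed to subscribe
    fetchQuery(environment, UserComponentUserRefreshQuery, {id: data.id}).subscribe({
      start: () => setIsRefreshing(true),
      complete: () => setIsRefreshing(false),
    });
  };
  return (
    <>
      <h1>{data.name}</h1>
      <div>Friends count: {data.friends?.count}</div>
      <Button disabled={isRefreshing} onClick={() => refresh()}>
        Fetch latest count {isRefreshing ? <LoadingSpinner /> : null}
      </Button>
    </>
  );
}

module.exports = UserComponent;
```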
Re-rendering with Different Data
Often times you'll want to re-render your existing query or fragment components with different data than what they were originally rendered with. This usually means fetching your existing queries or fragments with different variables.
Some examples of when you might want to do this:
- You've rendered a comment, and after user interaction want to fetch and re-render the comment body with the text translated to a different language.
- You've rendered a profile picture, and you want to fetch and re-render it with a different size or scale.
- You've rendered a list of search results, and you want to fetch and re-render the list with a new search term upon user input.
Re-rendering queries with different data
As mentioned in the Queries section, passing different query variables than the ones originally passed when using `useLazyLoadQuery` will cause the query to be fetched with the new variables, and re-render your component with the new data:
import type {AppQuery} from 'AppQuery.graphql';
const React = require('React');
const {useState, useTransition} = require('React');
const {graphql, useLazyLoadQuery} = require('react-relay/hooks');
function App() {
const [startTransition] = useTransition();
const [variables, setVariables] = useState({id: '4'});
const data = useLazyLoadQuery<AppQuery>(
graphql`
query AppQuery($id: ID!) {
user(id: $id) {
name
}
}
`,
variables,
);
return (
<>
<h1>
{data.user?.name}
<Button
onClick={() => {
startTransition(() => {
setVariables({id: 'different-id'});
});
}}>
Fetch different User
</Button>
</h1>
</>
);
}
Let's distill what's going on here:
- Calling `setVariables` and passing a new set of variables will re-render the component and cause the query to be fetched again with the newly provided variables. In this case, we will fetch the `User` with id `'different-id'`, and render the results when they're available.
- This will re-render your component and may cause it to suspend (as explained in Transitions And Updates That Suspend) if it needs to send and wait for a network request. If `setVariables` causes the component to suspend, you'll need to make sure that there's a `Suspense` boundary wrapping this component from above, and/or that you are using `useTransition` with a Suspense config in order to show the appropriate pending or loading state.
  - Note that since `setVariables` may cause the component to suspend, regardless of whether we're using a Suspense config to render a pending state, we should always use `startTransition` to schedule that update; any update that may cause a component to suspend should be scheduled using this pattern.
You can also provide a different `fetchPolicy` when refetching the query in order to specify whether to use locally cached data (as we covered in Reusing Cached Data for Render):
import type {AppQuery} from 'AppQuery.graphql';
const React = require('React');
const {useState, useTransition} = require('React');
const {graphql, useLazyLoadQuery} = require('react-relay/hooks');
function App() {
const [startTransition] = useTransition();
const [state, setState] = useState({
fetchPolicy: 'store-or-network',
variables: {id: '4'},
});
const data = useLazyLoadQuery<AppQuery>(
graphql`
query AppQuery($id: ID!) {
user(id: $id) {
name
}
}
`,
state.variables,
{fetchPolicy: state.fetchPolicy},
);
return (
<>
<h1>
{data.user?.name}
<Button
onClick={() => {
startTransition(() => {
setState({
fetchPolicy: 'network-only',
variables: {id: 'different-id'},
});
});
}}>
Fetch different User
</Button>
</h1>
</>
);
}
- In this case, we're keeping both the `fetchPolicy` and `variables` in component state in order to trigger a refetch both with different `variables` and a different `fetchPolicy`.
Re-rendering Fragments with Different Data
Sometimes, upon an event or user interaction, we'd like to render the same exact fragment that was originally rendered under the initial query, but with different data. Conceptually, this means fetching and rendering the currently rendered fragment again, but under a new query with different variables; or in other words, making the rendered fragment a new query root. Remember that fragments can't be fetched by themselves: they need to be part of a query, so we can't just "fetch" the fragment again by itself.
To do so, you can use the `useRefetchableFragment` hook, in order to refetch a fragment under a new query and variables, using the `refetch` function:
import type {CommentBodyRefetchQuery} from 'CommentBodyRefetchQuery.graphql';
import type {CommentBody_comment$key} from 'CommentBody_comment.graphql';
const React = require('React');
const {useTransition} = require('React')
const {graphql, useRefetchableFragment} = require('react-relay/hooks');
type Props = {|
comment: CommentBody_comment$key,
|};
function CommentBody(props: Props) {
const [startTransition] = useTransition();
const [data, refetch] = useRefetchableFragment<CommentBodyRefetchQuery, _>(
graphql`
fragment CommentBody_comment on Comment
@refetchable(queryName: "CommentBodyRefetchQuery") {
body(lang: $lang) {
text
}
}
`,
props.comment,
);
return (
<>
<p>{data.body?.text}</p>
<Button
onClick={() => {
startTransition(() => {
refetch({lang: 'SPANISH'}, {fetchPolicy: 'store-or-network'});
});
}}>
Translate Comment
</Button>
</>
);
}
module.exports = CommentBody;
Let's distill what's happening in this example:
- `useRefetchableFragment` behaves the same way as a `useFragment` (Fragments), but with a few additions:
  - It expects a fragment that is annotated with the `@refetchable` directive. Note that the `@refetchable` directive can only be added to fragments that are "refetchable", that is, to fragments that are on `Viewer`, on `Query`, or on a type that implements `Node` (i.e. a type that has an `id` field).
  - It returns a `refetch` function, which is already Flow-typed to expect the query variables that the generated query expects.
  - It takes two Flow type parameters: the type of the generated query (in our case `CommentBodyRefetchQuery`), and a second type which can always be inferred, so you only need to pass underscore (`_`).
- Calling `refetch` and passing a new set of variables will fetch the fragment again with the newly provided variables. The variables you need to provide are a subset of the variables that the generated query expects; the generated query will require an `id`, if the type of the fragment has an `id` field, and any other variables that are transitively referenced in your fragment.
  - In this case we're passing a new value for the `lang` variable to fetch the translated comment body.
- This will re-render your component and may cause it to suspend (as explained in Transitions And Updates That Suspend) if it needs to send and wait for a network request. If `refetch` causes the component to suspend, you'll need to make sure that there's a `Suspense` boundary wrapping this component from above, and/or that you are using `useTransition` with a Suspense config in order to show the appropriate pending state; a minimal boundary is sketched after this list.
  - Note that since `refetch` may cause the component to suspend, regardless of whether we're using a Suspense config to render a pending state, we should always use `startTransition` to schedule that update; any update that may cause a component to suspend should be scheduled using this pattern.
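For reference, here's a minimal sketch of such a boundary. `CommentSection` is a hypothetical parent component, and `LoadingSpinner` is the same illustrative spinner component used elsewhere in this guide:

```javascript
/**
 * CommentSection.react.js (hypothetical parent component)
 */
const React = require('React');
const {Suspense} = require('React');
const CommentBody = require('./CommentBody.react');

function CommentSection(props) {
  return (
    // If refetch causes CommentBody to suspend, this boundary shows a
    // fallback instead of letting the suspension bubble up the tree.
    <Suspense fallback={<LoadingSpinner label="Loading comment" />}>
      <CommentBody comment={props.comment} />
    </Suspense>
  );
}

module.exports = CommentSection;
```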
Rendering List Data and Pagination
There are several scenarios in which we'll want to query a list of data from the GraphQL server. Often times we won't want to query the entire set of data up front, but rather discrete sub-parts of the list, incrementally, usually in response to user input or other events. Querying a list of data in discrete parts is usually known as Pagination.
Connections
Specifically in Relay, we do this via GraphQL fields known as Connections. Connections are GraphQL fields that take a set of arguments to specify which "slice" of the list to query, and include in their response both the "slice" of the list that was requested, as well as information to indicate if there is more data available in the list and how to query it; this additional information can be used in order to perform pagination by querying for more "slices" or pages on the list.
More specifically, we perform cursor-based pagination, in which the input used to query for "slices" of the list is a `cursor` and a `count`. Cursors are essentially opaque tokens that serve as markers or pointers to a position in the list. If you're curious to learn more about the details of cursor-based pagination and connections, check out this spec.
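As a rough sketch of the shape this takes in GraphQL: the fragment name and the `friends` field below are hypothetical, but `edges`, `node`, `cursor` and `pageInfo` come from the connection spec. When you use the `@connection` directive as in the examples that follow, the Relay compiler adds and manages these bookkeeping fields for you:

```javascript
const {graphql} = require('react-relay');

// Illustrative only: shows what a connection field exposes per the spec.
const friendsConnectionSketch = graphql`
  fragment FriendsConnectionSketch_user on User {
    friends(first: $count, after: $cursor) {
      edges {
        cursor # opaque marker for this edge's position in the list
        node {
          name
        }
      }
      pageInfo {
        endCursor # pass this as $cursor to fetch the next "slice"
        hasNextPage # whether more items are available after this slice
      }
    }
  }
`;
```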
Rendering Connections
In Relay, in order to perform pagination, first you need to declare a fragment that queries for a connection:
const {graphql} = require('react-relay');
const userFragment = graphql`
fragment UserFragment on User {
name
friends(after: $cursor, first: $count)
@connection(key: "UserFragment_friends") {
edges {
node {
...FriendComponent
}
}
}
}
`;
- In the example above, we're querying for the `friends` field, which is a connection; in other words, it adheres to the connection spec. Specifically, we can query the `edges` and `node`s in the connection; the `edges` usually contain information about the relationship between the entities, while the `node`s are the actual entities at the other end of the relationship; in this case, the `node`s are objects of type `User` representing the user's friends.
- In order to indicate to Relay that we want to perform pagination over this connection, we need to mark the field with the `@connection` directive. We must also provide a static unique identifier for this connection, known as the `key`. We recommend the following naming convention for the connection key: `<fragment_name>_<field_name>`.
- We will go into more detail later as to why it is necessary to mark the field as a `@connection` and give it a unique `key` in our Adding and Removing Items From a Connection section.
In order to render this fragment which queries for a connection, we can use the `usePaginationFragment` Hook:
import type {FriendsListPaginationQuery} from 'FriendsListPaginationQuery.graphql';
import type {FriendsListComponent_user$key} from 'FriendsListComponent_user.graphql';
const React = require('React');
const {Suspense, SuspenseList} = require('React');
const {graphql, usePaginationFragment} = require('react-relay/hooks');
type Props = {|
user: FriendsListComponent_user$key,
|};
function FriendsListComponent(props: Props) {
const {data} = usePaginationFragment<FriendsListPaginationQuery, _>(
graphql`
fragment FriendsListComponent_user on User
@refetchable(queryName: "FriendsListPaginationQuery") {
name
friends(first: $count, after: $cursor)
@connection(key: "FriendsList_user_friends") {
edges {
node {
...FriendComponent
}
}
}
}
`,
props.user,
);
return (
<>
<h1>Friends of {data.name}:</h1>
<SuspenseList revealOrder="forwards">
{/* Extract each friend from the resulting data */}
{(data.friends?.edges ?? []).map(edge => {
const node = edge.node;
return (
<Suspense fallback={<Glimmer />}>
<FriendComponent user={node} />
</Suspense>
);
})}
</SuspenseList>
</>
);
}
module.exports = FriendsListComponent;
- `usePaginationFragment` behaves the same way as a `useFragment` (Fragments), so our list of friends is available under `data.friends.edges`, as declared by the fragment. However, it also has a few additions:
  - It expects a fragment that is a connection field annotated with the `@connection` directive.
  - It expects a fragment that is annotated with the `@refetchable` directive. Note that the `@refetchable` directive can only be added to fragments that are "refetchable", that is, to fragments that are on `Viewer`, on `Query`, or on a type that implements `Node` (i.e. a type that has an `id` field).
  - It takes two Flow type parameters: the type of the generated query (in our case `FriendsListPaginationQuery`), and a second type which can always be inferred, so you only need to pass underscore (`_`).
- Note that we're using [SuspenseList](https://reactjs.org/docs/concurrent-mode-patterns.html#suspenselist) to render the items: this will ensure that the list is rendered in order from top to bottom even if individual items in the list suspend and resolve at different times; that is, it will prevent items from rendering out of order, which prevents content from jumping around after it has been rendered.
Pagination
To actually perform pagination over the connection, we need to use the `loadNext` function to fetch the next page of items, which is available from `usePaginationFragment`:
import type {FriendsListPaginationQuery} from 'FriendsListPaginationQuery.graphql';
import type {FriendsListComponent_user$key} from 'FriendsListComponent_user.graphql';
const React = require('React');
const {Suspense, SuspenseList, useTransition} = require('React');
const {graphql, usePaginationFragment} = require('react-relay/hooks');
type Props = {|
user: FriendsListComponent_user$key,
|};
function FriendsListComponent(props: Props) {
const [startTransition] = useTransition();
const {data, loadNext} = usePaginationFragment<FriendsListPaginationQuery, _>(
graphql`
fragment FriendsListComponent_user on User
@refetchable(queryName: "FriendsListPaginationQuery") {
name
friends(first: $count, after: $cursor)
@connection(key: "FriendsList_user_friends") {
edges {
node {
name
age
}
}
}
}
`,
props.user,
);
return (
<>
<h1>Friends of {data.name}:</h1>
<SuspenseList revealOrder="forwards">
{(data.friends?.edges ?? []).map(edge => {
const node = edge.node;
return (
<Suspense fallback={<Glimmer />}>
<FriendComponent user={node} />
</Suspense>
);
})}
</SuspenseList>
<Button
onClick={() => {
startTransition(() => {
loadNext(10)
});
}}>
Load more friends
</Button>
</>
);
}
module.exports = FriendsListComponent;
Let's distill what's happening here:
- `loadNext` takes a count to specify how many more items in the connection to fetch from the server. In this case, when `loadNext` is called we'll fetch the next 10 friends in the friends list of our currently rendered `User`.
- When the request to fetch the next items completes, the connection will be automatically updated and the component will re-render with the latest items in the connection. In our case, this means that the `friends` field will always contain all of the friends that we've fetched so far. By default, Relay will automatically append new items to the connection upon completing a pagination request, and will make them available to your fragment component. If you need a different behavior, check out our Advanced Pagination Use Cases section.
- `loadNext` may cause the component or new children components to suspend (as explained in Transitions And Updates That Suspend). This means that you'll need to make sure that there's a `Suspense` boundary wrapping this component from above, and/or that you are using `useTransition` with a Suspense config in order to show the appropriate pending or loading state.
  - Note that since `loadNext` may cause the component to suspend, regardless of whether we're using a Suspense config to render a pending state, we should always use `startTransition` to schedule that update; any update that may cause a component to suspend should be scheduled using this pattern.
Often, you will also want to access information about whether there are more items available to load. To do this, you can use the `hasNext` value, also available from `usePaginationFragment`:
import type {FriendsListPaginationQuery} from 'FriendsListPaginationQuery.graphql';
import type {FriendsListComponent_user$key} from 'FriendsListComponent_user.graphql';
const React = require('React');
const {Suspense, SuspenseList, useTransition} = require('React');
const {graphql, usePaginationFragment} = require('react-relay/hooks');
type Props = {|
user: FriendsListComponent_user$key,
|};
function FriendsListComponent(props: Props) {
const [startTransition] = useTransition();
const {
data,
loadNext,
hasNext,
} = usePaginationFragment<FriendsListPaginationQuery, _>(
graphql`
fragment FriendsListComponent_user on User
@refetchable(queryName: "FriendsListPaginationQuery") {
name
friends(first: $count, after: $cursor)
@connection(key: "FriendsList_user_friends") {
edges {
node {
name
age
}
}
}
}
`,
props.user,
);
return (
<>
<h1>Friends of {data.name}:</h1>
<SuspenseList revealOrder="forwards">
{(data.friends?.edges ?? []).map(edge => {
const node = edge.node;
return (
<Suspense fallback={<Glimmer />}>
<FriendComponent user={node} />
</Suspense>
);
})}
</SuspenseList>
{/* Only render button if there are more friends to load in the list */}
{hasNext ? (
<Button
onClick={() => {
startTransition(() => {
loadNext(10)
});
}}>
Load more friends
</Button>
) : null}
</>
);
}
module.exports = FriendsListComponent;
- `hasNext` is a boolean which indicates if the connection has more items available. This information can be useful for determining if different UI controls should be rendered. In our specific case, we only render the `Button` if there are more friends available in the connection.
Blocking ("all-at-once") Pagination
So far when we've talked about pagination, we haven't specified how we want pagination to behave when we're rendering the new items we've fetched. Since the new items that we're fetching and rendering might individually suspend due to their own asynchronous dependencies (Loading States with Suspense), we need to be able to specify what kind of behavior we want to have as we render them.
Usually, we've identified that this will fall under one of these 2 categories:
- "One by one" (or "stream-y") pagination: Regardless of whether we're actually streaming at the data layer, conceptually this type of pagination is where we want to render items one by one, in order, as they become available. In this use case, we usually want to show some sort of loading placeholder for the new items (either in aggregate or for each individual item) as they are loaded in. This should not exclude the possibility of also having a separate pending or busy state (like a spinner next to the button that started the action). This is generally the default pagination behavior that we'll want, which applies to most lists and feeds.
- "All at once" pagination: This type of pagination is where we want to load and render the entire next page of items all at once, in a single paint; that is, we want to render the next page of items only when all of the items are ready (including when individual items suspend). Unlike the previous case, in this case, we do not want to show individual placeholders for the new items in the list, but instead we want to immediately show a pending or busy state, such as a spinner next (or close) to the element that started the action (like a button); this pending spinner should continue "spinning" until the entire next page of items are fully loaded and rendered. The best example of this type of use case is pagination when loading new comments in a list of comments.
So far in the previous pagination sections, we've implicitly been referring to the "one by one" pagination case when describing the use of `usePaginationFragment` + `SuspenseList` to render lists and show loading placeholders.
However, if we want to implement "all at once" pagination, we need to use a different API, `useBlockingPaginationFragment`:
import type {FriendsListPaginationQuery} from 'FriendsListPaginationQuery.graphql';
import type {FriendsListComponent_user$key} from 'FriendsListComponent_user.graphql';
const React = require('React');
const {useTransition, Suspense, SuspenseList} = require('React');
const {graphql, useBlockingPaginationFragment} = require('react-relay/hooks');
type Props = {|
user: FriendsListComponent_user$key,
|};
const SUSPENSE_CONFIG = {
// timeoutMs allows us to delay showing the "loading" state for a while
// in favor of showing a "pending" state that we control locally
timeoutMs: 30 * 1000,
};
function FriendsListComponent(props: Props) {
// isPending captures the "pending" state. It will become true
// **immediately** when the pagination transition starts, and will be set back
// to false when the transition reaches the fully "completed" stage
// (i.e. when all the new items in the list have fully loaded and rendered)
const [startTransition, isPending] = useTransition(SUSPENSE_CONFIG);
const {
data,
loadNext,
hasNext,
} = useBlockingPaginationFragment<FriendsListPaginationQuery, _>(
graphql`
fragment FriendsListComponent_user on User
@refetchable(queryName: "FriendsListPaginationQuery") {
name
friends(first: $count, after: $cursor)
@connection(key: "FriendsList_user_friends") {
edges {
node {
name
age
}
}
}
}
`,
props.user,
);
return (
<>
<h1>Friends of {data.name}:</h1>
<SuspenseList revealOrder="forwards">
{(data.friends?.edges ?? []).map(edge => {
const node = edge.node;
return (
<Suspense fallback={<Glimmer />}>
<FriendComponent user={node} />
</Suspense>
);
})}
</SuspenseList>
{/* Render a Spinner next to the button immediately, while transition is pending */}
{isPending ? <Spinner /> : null}
{hasNext ? (
<Button
// Disable the button immediately, while the transition is pending
disabled={isPending}
onClick={() => {
startTransition(() => {
loadNext(10)
});
}}>
Load more friends
</Button>
) : null}
</>
);
}
module.exports = FriendsListComponent;
Let's distill what's going on here:
- `loadNext` will cause the component to suspend, so we need to wrap it in `startTransition`, as explained in Transitions And Updates That Suspend.
- Also, similarly to the case described in Transitions And Updates That Suspend, we're passing the `SUSPENSE_CONFIG` config object to `useTransition` in order to configure how we want this transition to behave. Specifically, we can pass a `timeoutMs` property in the config, which will dictate how long React should wait before transitioning to the "loading" state (i.e. transitioning to showing the loading placeholders for the new items), in favor of showing a "pending" state controlled locally by the component during that time.
- `useTransition` will also return an `isPending` boolean value, which captures the pending state. That is, this value will become `true` immediately when the pagination transition starts, and will become `false` when the transition reaches the fully "completed" stage, that is, when all the new items have been fully loaded, including their own asynchronous dependencies that would cause them to suspend. We can use the `isPending` value to show immediate feedback in response to the user action, in this case by rendering a spinner next to the button and disabling the button. In this case, the spinner will be rendered and the button will be disabled until all the new items in the list have been fully loaded and rendered.
Using and Changing Filters
Often times when querying for a list of data, you can provide different values in the query which serve as filters that change the result set, or sort it differently.
Some examples of this are:
- Building a search typeahead, where the list of results is a list filtered by the search term entered by the user.
- Changing the ordering mode of the list comments currently displayed for a post, which could produce a completely different set of comments from the server.
- Changing the way News Feed is ranked and sorted.
Specifically, in GraphQL, connection fields can accept arguments to sort or filter the set of queried results:
fragment UserFragment on User {
name
friends(order_by: DATE_ADDED, search_term: "Alice", first: 10) {
edges {
node {
name
age
}
}
}
}
In Relay, we can pass those arguments as usual using GraphQL Variables.
type Props = {|
user: FriendsListComponent_user$key,
|};
function FriendsListComponent(props: Props) {
const {data, ...} = usePaginationFragment(
graphql`
fragment FriendsListComponent_user on User {
name
friends(
order_by: $orderBy,
search_term: $searchTerm,
after: $cursor,
first: $count,
) @connection(key: "FriendsListComponent_user_friends_connection") {
edges {
node {
name
age
}
}
}
}
`,
props.user,
);
return (...);
}
When paginating, the original values for those filters will be preserved:
type Props = {|
user: FriendsListComponent_user$key,
|};
function FriendsListComponent(props: Props) {
const userRef = props.user;
const {data, loadNext} = usePaginationFragment(
graphql`
fragment FriendsListComponent_user on User {
name
friends(order_by: $orderBy, search_term: $searchTerm)
@connection(key: "FriendsListComponent_user_friends_connection") {
edges {
node {
name
age
}
}
}
}
`,
userRef,
);
return (
<>
<h1>Friends of {data.name}:</h1>
<List items={(data.friends?.edges ?? []).map(edge => edge.node)}>{...}</List>
/*
Loading the next items will use the original order_by and search_term
values used for the initial query
*/
<Button onClick={() => loadNext(10)}>Load more friends</Button>
</>
);
}
- Note that calling `loadNext` will use the original `order_by` and `search_term` values used for the initial query. During pagination, these values won't (and shouldn't) change.
If we want to refetch the connection with different variables, we can use the `refetch` function provided by `usePaginationFragment`, similarly to how we do so when Re-rendering Fragments With Different Data:
/**
* FriendsListComponent.react.js
*/
import type {FriendsListComponent_user$key} from 'FriendsListComponent_user.graphql';
const React = require('React');
const {useState, useEffect, useTransition, SuspenseList} = require('React');
const {graphql, usePaginationFragment} = require('react-relay/hooks');
type Props = {|
searchTerm?: string,
user: FriendsListComponent_user$key,
|};
function FriendsListComponent(props: Props) {
const searchTerm = props.searchTerm;
const [startTransition] = useTransition();
const {data, loadNext, refetch} = usePaginationFragment(
graphql`
fragment FriendsListComponent_user on User {
name
friends(
order_by: $orderBy,
search_term: $searchTerm,
after: $cursor,
first: $count,
) @connection(key: "FriendsListComponent_user_friends_connection") {
edges {
node {
name
age
}
}
}
}
`,
props.user,
);
useEffect(() => {
// When the searchTerm provided via props changes, refetch the connection
// with the new searchTerm
startTransition(() => {
refetch({first: 10, searchTerm: searchTerm}, {fetchPolicy: 'store-or-network'});
});
}, [searchTerm]);
return (
<>
<h1>Friends of {data.name}:</h1>
{/* When the button is clicked, refetch the connection but sorted differently */}
<Button
onClick={() =>
startTransition(() => {
refetch({first: 10, orderBy: 'DATE_ADDED'});
})
}>
Sort by date added
</Button>
<SuspenseList>...</SuspenseList>
<Button onClick={() => loadNext(10)}>Load more friends</Button>
</>
);
}
Let's distill what's going on here:
- Calling `refetch` and passing a new set of variables will fetch the fragment again with the newly-provided variables. The variables you need to provide are a subset of the variables that the generated query expects; the generated query will require an `id`, if the type of the fragment has an `id` field, and any other variables that are transitively referenced in your fragment.
  - In our case, we need to pass the count we want to fetch as the `first` variable, and we can pass different values for our filters, like `orderBy` or `searchTerm`.
- This will re-render your component and may cause it to suspend (as explained in Transitions And Updates That Suspend) if it needs to send and wait for a network request. If `refetch` causes the component to suspend, you'll need to make sure that there's a `Suspense` boundary wrapping this component from above, and/or that you are using `useTransition` with a Suspense config in order to show the appropriate pending or loading state.
  - Note that since `refetch` may cause the component to suspend, regardless of whether we're using a Suspense config to render a pending state, we should always use `startTransition` to schedule that update; any update that may cause a component to suspend should be scheduled using this pattern.
- Conceptually, when we call `refetch`, we're fetching the connection from scratch. In other words, we're fetching it again from the beginning and "resetting" our pagination state. For example, if we fetch the connection with a different `search_term`, our pagination information for the previous `search_term` no longer makes sense, since we're essentially paginating over a new list of items.
Adding and Removing Items From a Connection
Usually when you're rendering a connection, you'll also want to be able to add or remove items to/from the connection in response to user actions.
As explained in our Updating Data section, Relay holds a local in-memory store of normalized GraphQL data, where records are stored by their IDs. When creating mutations, subscriptions, or local data updates with Relay, you must provide an `updater` function, inside which you can access and read records, as well as write and make updates to them. When records are updated, any components affected by the updated data will be notified and re-rendered.
Connection Records
In Relay, connection fields that are marked with the `@connection` directive are stored as special records in the store, and they hold and accumulate all of the items that have been fetched for the connection so far. In order to add or remove items from a connection, we need to access the connection record using the connection `key`, which was provided when declaring a `@connection`; specifically, this allows us to access a connection inside an `updater` function using the `ConnectionHandler` APIs.
For example, given the following fragment that declares a `@connection`:
const {graphql} = require('react-relay');
const storyFragment = graphql`
fragment StoryComponent_story on Story {
comments @connection(key: "StoryComponent_story_comments_connection") {
nodes {
body {
text
}
}
}
}
`;
We can access the connection record inside an `updater` function using `ConnectionHandler.getConnection`:
const {ConnectionHandler} = require('react-relay');
function updater(store: RecordSourceSelectorProxy) {
const storyRecord = store.get(storyID);
const connectionRecord = ConnectionHandler.getConnection(
storyRecord,
'StoryComponent_story_comments_connection',
);
// ...
}
Adding Edges
Once we have a connection record, we also need a record for the new edge that we want to add to the connection. Usually, mutation or subscription payloads will contain the new edge that was added; if not, you can also construct a new edge from scratch.
For example, in the following mutation we can query for the newly created edge in the mutation response:
const {graphql} = require('react-relay');
const createCommentMutation = graphql`
mutation CreateCommentMutation($input: CommentCreateData!) {
comment_create(input: $input) {
comment_edge {
cursor
node {
body {
text
}
}
}
}
}
`;
- Note that we also query for the `cursor` for the new edge; this isn't strictly necessary, but it is information that will be required if we need to perform pagination based on that `cursor`.
Inside an `updater`, we can access the edge inside the mutation response using Relay store APIs:
const {ConnectionHandler} = require('react-relay');
function updater(store: RecordSourceSelectorProxy) {
const storyRecord = store.get(storyID);
const connectionRecord = ConnectionHandler.getConnection(
storyRecord,
'StoryComponent_story_comments_connection',
);
// Get the payload returned from the server
const payload = store.getRootField('comment_create');
// Get the edge inside the payload
const serverEdge = payload.getLinkedRecord('comment_edge');
// Build edge for adding to the connection
const newEdge = ConnectionHandler.buildConnectionEdge(
store,
connectionRecord,
serverEdge,
);
// ...
}
- The mutation payload is available as a root field on the store, which can be read using the `store.getRootField` API. In our case, we're reading `comment_create`, which is the root field in the response.
- Note that we need to construct the new edge from the edge received from the server using `ConnectionHandler.buildConnectionEdge` before we can add it to the connection.
If you need to create a new edge from scratch, you can use `ConnectionHandler.createEdge`:
const {ConnectionHandler} = require('react-relay');
function updater(store: RecordSourceSelectorProxy) {
const storyRecord = store.get(storyID);
const connectionRecord = ConnectionHandler.getConnection(
storyRecord,
'StoryComponent_story_comments_connection',
);
// Create a new local Comment record
const id = `client:new_comment:${randomID()}`;
const newCommentRecord = store.create(id, 'Comment');
// Create new edge
const newEdge = ConnectionHandler.createEdge(
store,
connectionRecord,
newCommentRecord,
'CommentEdge', /* GraphQL type for the edge */
);
// ...
}
Once we have a new edge record, we can add it to the connection using `ConnectionHandler.insertEdgeAfter` or `ConnectionHandler.insertEdgeBefore`:
const {ConnectionHandler} = require('react-relay');
function updater(store: RecordSourceSelectorProxy) {
const storyRecord = store.get(storyID);
const connectionRecord = ConnectionHandler.getConnection(
storyRecord,
'StoryComponent_story_comments_connection',
);
const newEdge = (...);
// Add edge to the end of the connection
ConnectionHandler.insertEdgeAfter(
connectionRecord,
newEdge,
);
// Add edge to the beginning of the connection
ConnectionHandler.insertEdgeBefore(
connectionRecord,
newEdge,
);
}
- Note that these APIs will mutate the connection in-place.
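Putting the pieces together, here's a minimal sketch (not from the original guide) of how the `updater` steps above could be wired into `commitMutation` for the `CreateCommentMutation` shown earlier; `storyID` and `input` are assumed to be available from the calling code:

```javascript
const {commitMutation, graphql, ConnectionHandler} = require('react-relay');

function commitCreateCommentMutation(environment, storyID, input) {
  return commitMutation(environment, {
    mutation: graphql`
      mutation CreateCommentMutation($input: CommentCreateData!) {
        comment_create(input: $input) {
          comment_edge {
            cursor
            node {
              body {
                text
              }
            }
          }
        }
      }
    `,
    variables: {input},
    updater: store => {
      // Look up the connection record by its parent record and @connection key
      const storyRecord = store.get(storyID);
      const connectionRecord = ConnectionHandler.getConnection(
        storyRecord,
        'StoryComponent_story_comments_connection',
      );
      // Read the new edge from the mutation payload and rebuild it for the connection
      const payload = store.getRootField('comment_create');
      const serverEdge = payload.getLinkedRecord('comment_edge');
      const newEdge = ConnectionHandler.buildConnectionEdge(
        store,
        connectionRecord,
        serverEdge,
      );
      // Append it, so components rendering this connection re-render with the new comment
      ConnectionHandler.insertEdgeAfter(connectionRecord, newEdge);
    },
  });
}
```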
Removing Edges
`ConnectionHandler` provides a similar API to remove an edge from a connection, via `ConnectionHandler.deleteNode`:
const {ConnectionHandler} = require('react-relay');
function updater(store: RecordSourceSelectorProxy) {
const storyRecord = store.get(storyID);
const connectionRecord = ConnectionHandler.getConnection(
storyRecord,
'StoryComponent_story_comments_connection',
);
// Remove edge from the connection, given the ID of the node
ConnectionHandler.deleteNode(
connectionRecord,
commentIDToDelete,
);
}
- In this case `ConnectionHandler.deleteNode` will remove an edge given a `node` ID. This means it will look up which edge in the connection contains a node with the provided ID, and remove that edge.
- Note that this API will mutate the connection in-place.
Remember: When performing any of the operations described here to mutate a connection, any fragment or query components that are rendering the affected connection will be notified and re-render with the latest version of the connection.
You can also check out our complete Relay Store APIs here
Connection Identity With Filters
In our previous examples, our connections didn't take any arguments as filters. If you declared a connection that takes arguments as filters, the values used for the filters will be part of the connection identifier. In other words, each of the values passed in as connection filters will be used to identify the connection in the Relay store, excluding pagination arguments; i.e. excluding `first:`, `last:`, `before:`, and `after:`.
For example, let's say the `comments` field took the following arguments, which we pass in as GraphQL Variables:
const {graphql} = require('react-relay');
const storyFragment = graphql`
fragment StoryComponent_story on Story {
comments(
order_by: $orderBy,
filter_mode: $filterMode,
language: $language,
) @connection(key: "StoryComponent_story_comments_connection") {
edges {
nodes {
body {
text
}
}
}
}
}
`;
In the example above, this means that whatever values we used for `$orderBy`, `$filterMode` and `$language` when we queried for the `comments` field will be part of the connection identifier, and we'll need to use those values when accessing the connection record from the Relay store.
In order to do so, we need to pass a third argument to `ConnectionHandler.getConnection`, with concrete filter values to identify the connection:
const {ConnectionHandler} = require('react-relay');
function updater(store: RecordSourceSelectorProxy) {
const storyRecord = store.get(storyID);
// Get the connection instance for the connection with comments sorted
// by the date they were added
const connectionRecordSortedByDate = ConnectionHandler.getConnection(
storyRecord,
'StoryComponent_story_comments_connection',
{order_by: 'DATE_ADDED', filter_mode: null, language: null}
);
// Get the connection instance for the connection that only contains
// comments made by friends
const connectionRecordFriendsOnly = ConnectionHandler.getConnection(
storyRecord,
'StoryComponent_story_comments_connection',
{order_by: null, filter_mode: 'FRIENDS_ONLY', language: null}
);
}
This implies that by default, each combination of values used for filters will produce a different record for the connection.
When making updates to a connection, you will need to make sure to update all of the relevant records affected by a change. For example, if we were to add a new comment to our example connection, we'd need to make sure not to add the comment to the `FRIENDS_ONLY` connection if the new comment wasn't made by a friend of the user:
const {ConnectionHandler} = require('react-relay');
function updater(store: RecordSourceSelectorProxy) {
const storyRecord = store.get(storyID);
// Get the connection instance for the connection with comments sorted
// by the date they were added
const connectionRecordSortedByDate = ConnectionHandler.getConnection(
storyRecord,
'StoryComponent_story_comments_connection',
{order_by: 'DATE_ADDED', filter_mode: null, language: null}
);
// Get the connection instance for the connection that only contains
// comments made by friends
const connectionRecordFriendsOnly = ConnectionHandler.getConnection(
storyRecord,
'StoryComponent_story_comments_connection',
{order_by: null, filter_mode: 'FRIENDS_ONLY', language: null}
);
const newComment = (...);
const newEdge = (...);
ConnectionHandler.insertEdgeAfter(
connectionRecordSortedByDate,
newEdge,
);
if (isMadeByFriend(storyRecord, newComment)) {
// Only add new comment to friends-only connection if the comment
// was made by a friend
ConnectionHandler.insertEdgeAfter(
connectionRecordFriendsOnly,
newEdge,
);
}
}
Managing connections with many filters:
As you can see, just adding a few filters to a connection can make the complexity and number of connection records that need to be managed explode. In order to more easily manage this, Relay provides 2 strategies:
- Specify exactly which filters should be used as connection identifiers.
By default, all non-pagination filters will be used as part of the connection identifier. However, when declaring a `@connection`, you can specify the exact set of filters to use for connection identity:
const {graphql} = require('react-relay');
const storyFragment = graphql`
fragment StoryComponent_story on Story {
comments(
order_by: $orderBy
filter_mode: $filterMode
language: $language
)
@connection(
key: "StoryComponent_story_comments_connection"
filters: ["order_by", "filter_mode"]
) {
edges {
nodes {
body {
text
}
}
}
}
}
`;
- By specifying `filters` when declaring the `@connection`, we're indicating to Relay the exact set of filter values that should be used as part of connection identity. In this case, we're excluding `language`, which means that only values for `order_by` and `filter_mode` will affect connection identity and thus produce new connection records.
- Conceptually, this means that we're specifying which arguments affect the output of the connection from the server, or in other words, which arguments are actually filters. If one of the connection arguments doesn't actually change the set of items that are returned from the server, or their ordering, then it isn't really a filter on the connection, and we don't need to identify the connection differently when that value changes. In our example, changing the `language` of the comments we request doesn't change the set of comments that are returned by the connection, so it is safe to exclude it from `filters`.
- This can also be useful if we know that any of the connection arguments will never change in our app, in which case it would also be safe to exclude them from `filters`.
- An easier API alternative to manage multiple connections with multiple filter values is still pending
TODO
Advanced Pagination Use Cases
In this section we're going to cover how to implement more advanced pagination use cases than the default cases covered by `usePaginationFragment`.
Pagination Over Multiple Connections
If you need to paginate over multiple connections within the same component, you can use `usePaginationFragment` multiple times:
import type {CombinedFriendsListComponent_user$key} from 'CombinedFriendsListComponent_user.graphql';
import type {CombinedFriendsListComponent_viewer$key} from 'CombinedFriendsListComponent_viewer.graphql';
const React = require('React');
const {graphql, usePaginationFragment} = require('react-relay/hooks');
type Props = {|
user: CombinedFriendsListComponent_user$key,
viewer: CombinedFriendsListComponent_viewer$key,
|};
function CombinedFriendsListComponent(props: Props) {
const {data: userData, ...userPagination} = usePaginationFragment(
graphql`
fragment CombinedFriendsListComponent_user on User {
name
friends
@connection(
key: "CombinedFriendsListComponent_user_friends_connection"
) {
edges {
node {
name
age
}
}
}
}
`,
props.user,
);
const {data: viewerData, ...viewerPagination} = usePaginationFragment(
graphql`
fragment CombinedFriendsListComponent_viewer on Viewer {
actor {
... on User {
name
friends
@connection(
key: "CombinedFriendsListComponent_viewer_friends_connection"
) {
edges {
node {
name
age
}
}
}
}
}
}
`,
props.viewer,
);
return (...);
}
However, we recommend trying to keep a single connection per component, to keep the components easier to follow.
Bi-directional Pagination
In the Pagination section we covered how to use `usePaginationFragment` to paginate in a single "forward" direction. However, connections also allow paginating in the opposite "backward" direction. The meaning of "forward" and "backward" directions will depend on how the items in the connection are sorted, for example "forward" could mean more recent, and "backward" could mean less recent.
Regardless of the semantic meaning of the direction, Relay also provides the same APIs to paginate in the opposite direction using `usePaginationFragment`, as long as the `before` and `last` connection arguments are also used along with `after` and `first`:
import type {FriendsListComponent_user$key} from 'FriendsListComponent_user.graphql';
const React = require('React');
const {Suspense} = require('React');
const {graphql, usePaginationFragment} = require('react-relay/hooks');
type Props = {|
userRef: FriendsListComponent_user$key,
|};
function FriendsListComponent(props: Props) {
const {
data,
loadPrevious,
hasPrevious,
// ... forward pagination values
} = usePaginationFragment(
graphql`
fragment FriendsListComponent_user on User {
name
friends(after: $after, before: $before, first: $first, last: $last)
@connection(key: "FriendsListComponent_user_friends_connection") {
edges {
node {
name
age
}
}
}
}
`,
props.userRef,
);
return (
<>
<h1>Friends of {data.name}:</h1>
<List items={data.friends?.edges.map(edge => edge.node)}>
{node => {
return (
<div>
{node.name} - {node.age}
</div>
);
}}
</List>
{hasPrevious ? (
<Button onClick={() => loadPrevious(10)}>
Load more friends
</Button>
) : null}
{/* Forward pagination controls can go simultaneously here */}
</>
);
}
- The APIs for both "forward" and "backward" pagination are exactly the same; they're only named differently. When paginating forward, the `after` and `first` connection arguments will be used; when paginating backward, the `before` and `last` connection arguments will be used.
- Note that the primitives for both "forward" and "backward" pagination are exposed from a single `usePaginationFragment` call, so both "forward" and "backward" pagination can be performed simultaneously in the same component.
Custom Connection State
By default, when using `usePaginationFragment` and `@connection`, Relay will append new pages of items to the connection when paginating "forward", and prepend new pages of items when paginating "backward". This means that your component will always render the full connection, with all of the items that have been accumulated so far via pagination, and/or items that have been added or removed via mutations or subscriptions.
However, it is possible that you'd need different behavior for how to merge and accumulate pagination results (or other updates to the connection), and/or derive local component state from changes to the connection. Some examples of this might be:
- Keeping track of different visible slices or windows of the connection.
- Visually separating each page of items. This requires knowledge of the exact set of items inside each page that has been fetched.
- Displaying different ends of the same connection simultaneously, while keeping track of the "gaps" between them, and being able to merge results when performing pagination between the gaps. For example, imagine rendering a list of comments where the oldest comments are displayed at the top, then a "gap" that can be interacted with to paginate, and then a section at the bottom which shows the most recent comments that have been added by the user or received via real-time subscriptions.
To address these more complex use cases, Relay is still working on a solution:
TODO
Refreshing connections
TODO
Prefetching Pages of a Connection
TODO
Rendering One Page of Items at a Time
TODO
Advanced Data Fetching
Preloading Data
Preloading Data for Initial Load (Server Preloading)
OSS TODO
Preloading Data for Transitions, in Parallel With Code (Client Preloading)
By default, navigations or transitions to different pages work as follows:
- first, we load the code necessary to render that new page, since that will usually correspond to a separate JS bundle.
- then, once the code for the new page is loaded, we can start rendering it, and only at that point, when we start rendering the page, do we send a network request to fetch the data that the page needs, for example by using useLazyLoadQuery (Queries).
This applies not only to transitions to other pages, but also to displaying elements such as dialogs, menus, popovers, or other elements that are hidden behind some user interaction and have both code and data dependencies.
The problem with this naive approach is that we have to wait a significant amount of time before we can actually start fetching the data we need. Ideally, by the time a user interaction occurs, we'd already know what data we will need in order to fulfill that interaction, and we could start preloading it from the client immediately, in parallel with loading the JS code that we're going to need; by doing so, we can significantly reduce the time it takes to show content to users after an interaction.
In order to do so, we can use Relay EntryPoints, which are a set of APIs for efficiently loading both the code and data dependencies of any view in parallel. Check out our API reference for Entry Points.
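To make the idea concrete, here is a minimal sketch (not using EntryPoints) that starts the data fetch at interaction time, in parallel with loading the code for the next view, using the fetchQuery API covered in the Fetching Queries section below. The ProfilePagePreloadQuery and the './ProfilePage' module are hypothetical names for this example:
const {fetchQuery, graphql} = require('react-relay/hooks');

// Hypothetical query for the data the next page will need
const preloadQuery = graphql`
  query ProfilePagePreloadQuery($id: ID!) {
    user(id: $id) {
      name
    }
  }
`;

function onNavigateToProfile(environment, userID) {
  // Kick off the data fetch immediately; the response is written into the
  // Relay store, so the page can read it once it renders.
  fetchQuery(environment, preloadQuery, {id: userID}).subscribe({});
  // In parallel, start loading the code for the page (hypothetical module).
  return import('./ProfilePage');
}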
Incremental Data Delivery
OSS TODO
Data-driven Dependencies
OSS TODO
Image Prefetching
The standard approach to loading images with Relay is to first request image URIs via a Relay fragment, then render an appropriate image component with the resulting URI as the source. With this approach the image is only downloaded if it is actually rendered, which is often a good tradeoff as it avoids fetching images that aren't used. However, there are some cases where a product knows statically that it will render an image, and in this case performance can be improved by downloading the image as early as possible. Relay image prefetching allows products to specify that specific image URLs be downloaded as early as possible - as soon as the GraphQL data is fetched - without waiting for the consuming component to actually render.
Usage
OSS TODO
When To Use Image Prefetching
We recommend only using prefetching for images that will be unconditionally rendered to the DOM by your components soon after being fetched, and avoiding prefetching of images that are hidden behind an interaction.
Updating Data
Relay holds a local in-memory store of normalized GraphQL data, which accumulates data as GraphQL queries are made throughout usage of our app; think of it as a local database of GraphQL data. When records are updated, any components affected by the updated data will be notified and re-rendered with the updated data.
In this section, we're going to go over how to update data in the server as well as how to update our local data store accordingly, ensuring that our components are kept in sync with the latest data.
GraphQL Mutations
In GraphQL, data in the server is updated using GraphQL Mutations. Mutations are read-write server operations, which both modify data in the backend, and allow querying for the modified data from the server in the same request.
A GraphQL mutation looks very similar to a query, with the exception that it uses the mutation
keyword:
mutation LikePostMutation($input: LikePostData!) {
like_post(data: $input) {
post {
id
viewer_does_like
like_count
}
}
}
- The mutation above modifies the server data to "like" the specified Post object. The like_post field is the mutation field itself, which takes a specific input and will be processed by the server to update the relevant data in the backend.
- like_post returns a specific GraphQL type which exposes the data we can query in the mutation response. In this case, we're querying for the updated post object, including the updated like_count and the updated value for viewer_does_like, indicating whether the current viewer likes the post.
An example of a successful response for the above mutation could look like this:
{
"like_post": {
"post": {
"id": "post-id",
"viewer_does_like": true,
"like_count": 1,
}
}
}
In Relay, we can declare GraphQL mutations using the graphql
tag too:
const {graphql} = require('react-relay');
const likeMutation = graphql`
mutation LikePostMutation($input: LikePostData!) {
like_post(data: $input) {
post {
id
viewer_does_like
like_count
}
}
}
`;
- Note that mutations can also reference GraphQL Variables in the same way queries or fragments do.
In order to execute a mutation against the server in Relay, we can use the commitMutation
API:
import type {Environment} from 'react-relay';
import type {LikePostData, LikePostMutation} from 'LikePostMutation.graphql';
const {commitMutation, graphql} = require('react-relay');
function commitLikePostMutation(
environment: Environment,
input: LikePostData,
) {
return commitMutation<LikePostMutation>(environment, {
mutation: graphql`
mutation LikePostMutation($input: LikePostData!) {
like_post(data: $input) {
post {
id
viewer_does_like
like_count
}
}
}
`,
variables: {input},
onCompleted: response => {} /* Mutation completed */,
onError: error => {} /* Mutation errored */,
});
}
module.exports = {commit: commitLikePostMutation};
Let's distill what's happening here:
- commitMutation takes an environment, the graphql tagged mutation, and the variables to use for sending the mutation request to the server.
- Note that the input for the mutation can be Flow typed with the autogenerated type available from the LikePostMutation.graphql module. In general, Relay will generate Flow types for mutations at build time, with the following naming format: <mutation_name>.graphql.js.
- Note that the variables, the response in onCompleted, and the optimisticResponse in commitMutation will all be typed together by providing the autogenerated type LikePostMutation from the LikePostMutation.graphql module. To include the type for the optimisticResponse, a @raw_response_type directive should be added to the mutation query root.
- commitMutation also takes onCompleted and onError callbacks, which will respectively be called when the request completes successfully or when an error occurs.
- When the mutation response is received, if the objects in the mutation response have IDs, the records in the local store will automatically be updated with the new field values from the response. In this case, it would automatically find the existing Post object matching the given ID in the store, and update the values for its viewer_does_like and like_count fields.
- Note that any local data updates caused by the mutation will automatically cause components subscribed to the data to be notified of the change and re-render.
Updater Functions
However, if the updates you wish to perform on the local data in response to the mutation are more complex than just updating the values of fields, like deleting or creating new records, or Adding and Removing Items From a Connection, you can provide an updater
function to commitMutation
for full control over how to update the store:
import type {Environment} from 'react-relay';
import type {CommentCreateData, CreateCommentMutation} from 'CreateCommentMutation.graphql';
const {commitMutation, graphql} = require('react-relay');
const {ConnectionHandler} = require('relay-runtime');
function commitCommentCreateMutation(
environment: Environment,
postID: string,
input: CommentCreateData,
) {
return commitMutation<CreateCommentMutation>(environment, {
mutation: graphql`
mutation CreateCommentMutation($input: CommentCreateData!) {
comment_create(input: $input) {
comment_edge {
cursor
node {
body {
text
}
}
}
}
}
`,
variables: {input},
onCompleted: () => {},
onError: error => {},
updater: store => {
const postRecord = store.get(postID);
// Get connection record
const connectionRecord = ConnectionHandler.getConnection(
postRecord,
'CommentsComponent_comments_connection',
);
// Get the payload returned from the server
const payload = store.getRootField('comment_create');
// Get the edge inside the payload
const serverEdge = payload.getLinkedRecord('comment_edge');
// Build edge for adding to the connection
const newEdge = ConnectionHandler.buildConnectionEdge(
store,
connectionRecord,
serverEdge,
);
// Add edge to the end of the connection
ConnectionHandler.insertEdgeAfter(
connectionRecord,
newEdge,
);
},
});
}
module.exports = {commit: commitCommentCreateMutation};
Let's distill this example:
- updater takes a store argument, which is an instance of a [RecordSourceSelectorProxy](https://relay.dev/docs/en/relay-store.html#recordsourceselectorproxy); this interface allows you to imperatively write and read data directly to and from the Relay store. This means that you have full control over how to update the store in response to the mutation response: you can create entirely new records, or update or delete existing ones. The full API for reading and writing to the Relay store is available here: https://relay.dev/docs/en/relay-store.html
- In our specific example, we're adding a new comment to our local store after it has successfully been added on the server. Specifically, we're adding a new item to a connection; for more details on how that works, check out our Adding and Removing Items From a Connection section.
- Note that the mutation response is a root field record that can be read from the store, specifically using the store.getRootField API. In our case, we're reading the comment_create root field, which is a root field in the mutation response.
- Note that any local data updates caused by the mutation updater will automatically cause components subscribed to the data to be notified of the change and re-render.
Optimistic updates
Oftentimes, when executing a mutation, we don't want to wait for the server response to complete before responding to the user interaction. For example, if a user clicks the "Like" button, we don't want to wait until the mutation response comes back before showing them that the post has been liked; ideally, we'd do that instantly.
More generally, in these cases we want to immediately update our local data optimistically, in order to improve perceived responsiveness; that is, we want to update our local data to immediately reflect what it would look like after the mutation succeeds. If the mutation ends up not succeeding, we can roll back the change and show an error message, but we're optimistically expecting the mutation to succeed most of the time.
In order to do this, Relay provides 2 APIs to specify an optimistic update when executing a mutation:
Optimistic Response
When you can predict what the server response for a mutation is going to be, the simplest way to optimistically update the store is by providing an optimisticResponse
to commitMutation
:
import type {Environment} from 'react-relay';
import type {LikePostData, LikePostMutation} from 'LikePostMutation.graphql';
const {commitMutation, graphql} = require('react-relay');
function commitLikePostMutation(
environment: Environment,
postID: string,
input: LikePostData,
) {
return commitMutation<LikePostMutation>(environment, {
mutation: graphql`
mutation LikePostMutation($input: LikePostData!)
@raw_response_type {
like_post(data: $input) {
post {
id
viewer_does_like
}
}
}
`,
variables: {input},
optimisticResponse: {
like_post: {
post: {
id: postID,
viewer_does_like: true,
},
},
},
onCompleted: () => {} /* Mutation completed */,
onError: error => {} /* Mutation errored */,
});
}
module.exports = {commit: commitLikePostMutation};
Let's see what's happening in this example.
- The optimisticResponse is an object matching the shape of the mutation response, and it simulates a successful response from the server. When an optimisticResponse is provided, Relay will automatically process it in the same way it would process a response from the server, and update the data accordingly (i.e. update the values of fields for the record with the matching id).
  - In this case, we would immediately set the viewer_does_like field to true on our Post object, which would be immediately reflected in our UI.
- If the mutation succeeds, the optimistic update will be rolled back, and the server response will be applied.
- If the mutation fails, the optimistic update will be rolled back, and the error will be communicated via the onError callback.
- Note that by adding the @raw_response_type directive, the type for optimisticResponse is generated, and the Flow type is applied via commitMutation<LikePostMutation>.
Optimistic Updater
However, in some cases we can't statically predict what the server response will be, or we need to optimistically perform more complex updates, like deleting or creating new records, or Adding and Removing Items From a Connection. In these cases we can provide an optimisticUpdater
function to commitMutation
. For example, we can rewrite the above example using an optimisticUpdater
instead of an optimisticResponse
:
import type {Environment} from 'react-relay';
import type {LikePostData} from 'LikePostMutation.graphql';
const {commitMutation, graphql} = require('react-relay');
function commitLikePostMutation(
environment: Environment,
postID: string,
input: LikePostData,
) {
return commitMutation(environment, {
mutation: graphql`
mutation LikePostMutation($input: LikePostData!) {
like_post(data: $input) {
post {
id
like_count
viewer_does_like
}
}
}
`,
variables: {input},
optimisticUpdater: store => {
// Get the record for the Post object
const postRecord = store.get(postID);
// Read the current value for the like_count
const currentLikeCount = postRecord.getValue('like_count');
// Optimistically increment the like_count by 1
postRecord.setValue((currentLikeCount ?? 0) + 1, 'like_count');
// Optimistically set viewer_does_like to true
postRecord.setValue(true, 'viewer_does_like');
},
onCompleted: () => {} /* Mutation completed */,
onError: error => {} /* Mutation errored */,
});
}
module.exports = {commit: commitLikePostMutation};
Let's see what's happening here:
- The optimisticUpdater has the same signature and behaves the same way as the regular updater function, the main difference being that it will be executed immediately, before the mutation response completes.
- If the mutation succeeds, the optimistic update will be rolled back, and the server response will be applied.
  - Note that if we used an optimisticResponse, we wouldn't be able to statically provide a value for like_count, since doing so requires reading the current value from the store first, which we can do with an optimisticUpdater.
  - Also note that when the mutation completes, the value from the server might differ from the value we optimistically predicted locally. For example, if other "Likes" occurred at the same time, the final like_count from the server might've incremented by more than 1.
- If the mutation fails, the optimistic update will be rolled back, and the error will be communicated via the onError callback.
- Note that we're not providing an updater function, which is okay. If it's not provided, the default behavior will still be applied when the server response arrives (i.e. merging the new field values for like_count and viewer_does_like on the Post object).
NOTE: Remember that any updates to local data caused by a mutation will automatically notify and re-render components subscribed to that data.
Order of Execution of Updater Functions
In general, execution of the updater and optimistic updates will occur in the following order:
- If an optimisticResponse is provided, Relay will use it to merge the new field values for the records that match the ids in the optimisticResponse.
- If an optimisticUpdater is provided, Relay will execute it and update the store accordingly.
- If the mutation request succeeds:
  - Any optimistic update that was applied will be rolled back.
  - Relay will use the server response to merge the new field values for the records that match the ids in the response.
  - If an updater was provided, Relay will execute it and update the store accordingly. The server payload will be available to the updater as a root field in the store.
- If the mutation request fails:
  - Any optimistic update that was applied will be rolled back.
  - The onError callback will be called.
Full Example
This means that in more complicated scenarios you can still provide all 3 options: optimisticResponse, optimisticUpdater and updater. For example, the mutation to add a new comment could look something like the following (for full details on updating connections, check out our Adding and Removing Items From a Connection guide):
import type {Environment} from 'react-relay';
import type {CommentCreateData, CreateCommentMutation} from 'CreateCommentMutation.graphql';
const {commitMutation, graphql} = require('react-relay');
const {ConnectionHandler} = require('relay-runtime');
function commitCommentCreateMutation(
environment: Environment,
postID: string,
input: CommentCreateData,
) {
return commitMutation<CreateCommentMutation>(environment, {
mutation: graphql`
mutation CreateCommentMutation($input: CommentCreateData!) {
comment_create(input: $input) {
post {
id
viewer_has_commented
}
comment_edge {
cursor
node {
body {
text
}
}
}
}
}
`,
variables: {input},
onCompleted: () => {},
onError: error => {},
// Optimistically set the value for `viewer_has_commented`
optimisticResponse: {
post: {
id: postID,
viewer_has_commented: true,
},
},
// Optimistically add a new comment to the comments connection
optimisticUpdater: store => {
const postRecord = store.get(postID);
const connectionRecord = ConnectionHandler.getConnection(
postRecord,
'CommentsComponent_comments_connection',
);
// Create a new local Comment from scratch
const id = `client:new_comment:${randomID()}`;
const newCommentRecord = store.create(id, 'Comment');
// ... update new comment with content
// Create new edge from scratch
const newEdge = ConnectionHandler.createEdge(
store,
connectionRecord,
newCommentRecord,
'CommentEdge' /* GraphQl Type for edge */,
);
// Add edge to the end of the connection
ConnectionHandler.insertEdgeAfter(connectionRecord, newEdge);
},
updater: store => {
const postRecord = store.get(postID);
const connectionRecord = ConnectionHandler.getConnection(
postRecord,
'CommentsComponent_comments_connection',
);
// Get the payload returned from the server
const payload = store.getRootField('comment_create');
// Get the edge from server payload
const newEdge = payload.getLinkedRecord('comment_edge');
// Add edge to the end of the connection
ConnectionHandler.insertEdgeAfter(connectionRecord, newEdge);
},
});
}
module.exports = {commit: commitCommentCreateMutation};
Let's distill this example, according to the execution order of the updaters:
- Given that an optimisticResponse was provided, it will be applied first. This will cause the new value of viewer_has_commented to be merged into the existing Post object, setting it to true.
- Given that an optimisticUpdater was provided, it will be executed next. Our optimisticUpdater will create new comment and edge records from scratch, simulating what the new edge in the server response would look like, and then add the new edge to the connection.
- When the optimistic updates conclude, components subscribed to this data will be notified.
- When the mutation succeeds, all of our optimistic updates will be rolled back.
- The server response will be processed by Relay, and this will cause the new value of viewer_has_commented to be merged into the existing Post object, setting it to true.
- Finally, the updater function we provided will be executed. The updater function is very similar to the optimisticUpdater function; however, instead of creating the new data from scratch, it reads it from the mutation payload and adds the new edge to the connection.
Invalidating Data during a Mutation
The recommended approach when executing a mutation is to request all the relevant data that was affected by the mutation back from the server (as part of the mutation body), so that our local Relay store is consistent with the state of the server.
However, oftentimes it can be unfeasible to know and specify all the possible data that would be affected by mutations that have large rippling effects (e.g. imagine “blocking a user” or “leaving a group”).
For these types of mutations, it's often more straightforward to explicitly mark some data as stale (or the whole store), so that Relay knows to refetch it the next time it is rendered. In order to do so, you can use the data invalidation APIs documented in our Staleness of Data section.
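As a rough sketch of this pattern, the hypothetical mutation below marks the affected user record (or, alternatively, the whole store) as stale from its updater; the invalidation calls assume the APIs described in the Staleness of Data section:
const {commitMutation, graphql} = require('react-relay');

function commitBlockUserMutation(environment, userID) {
  return commitMutation(environment, {
    mutation: graphql`
      mutation BlockUserMutation($input: BlockUserData!) {
        block_user(data: $input) {
          user {
            id
          }
        }
      }
    `,
    variables: {input: {user_id: userID}},
    updater: store => {
      // Mark the affected user record as stale so it is refetched the next
      // time it is rendered.
      const userRecord = store.get(userID);
      if (userRecord != null) {
        userRecord.invalidateRecord();
      }
      // If the rippling effects are too broad to enumerate, the whole store
      // can be marked as stale instead:
      // store.invalidateStore();
    },
  });
}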
Mutation Queueing
TODO: Left to be implemented in user space
GraphQL Subscriptions
GraphQL Subscriptions (GQLS) are a mechanism which allows clients to subscribe to changes in a piece of data from the server, and get notified whenever that data changes.
A GraphQL Subscription looks very similar to a query, with the exception that it uses the subscription keyword:
subscription LikePostSubscription($input: LikePostSubscribeData!) {
like_post_subscribe(data: $input) {
post {
id
like_count
}
}
}
- Subscribing to the above subscription will notify the client whenever the specified Post object has been "liked" or "unliked". The like_post_subscribe field is the subscription field itself, which takes a specific input and will set up the subscription in the backend.
- like_post_subscribe returns a specific GraphQL type which exposes the data we can query in the subscription payload; that is, whenever the client is notified, it will receive the subscription payload in the notification. In this case, we're querying for the Post object with its updated like_count, which allows us to show the like count in real time.
An example of a subscription payload received by the client could look like this:
{
"like_post_subscribe": {
"post": {
"id": "post-id",
"like_count": 321,
}
}
}
In Relay, we can declare GraphQL subscriptions using the graphql
tag too:
const {graphql} = require('react-relay');
const postLikeSubscription = graphql`
subscription LikePostSubscription($input: LikePostSubscribeData!) {
like_post_subscribe(data: $input) {
post {
id
like_count
}
}
}
`;
- Note that subscriptions can also reference GraphQL Variables in the same way queries or fragments do.
In order to execute a subscription against the server in Relay, we can use the requestSubscription
API:
import type {Environment} from 'react-relay';
import type {LikePostSubscribeData} from 'LikePostSubscription.graphql';
const {graphql, requestSubscription} = require('react-relay');
function postLikeSubscribe(
environment: Environment,
postID: string,
input: LikePostSubscribeData,
) {
return requestSubscription(environment, {
subscription: graphql`
subscription LikePostSubscription(
$input: LikePostSubscribeData!
) {
like_post_subscribe(data: $input) {
post {
id
like_count
}
}
}
`,
variables: {input},
onCompleted: () => {} /* Subscription established */,
onError: error => {} /* Subscription errored */,
onNext: response => {} /* Subscription payload received */
});
}
module.exports = {subscribe: postLikeSubscribe};
Let's distill what's happening here:
- requestSubscription takes an environment, the graphql tagged subscription, and the variables to use.
- Note that the input for the subscription can be Flow typed with the autogenerated type available from the LikePostSubscription.graphql module. In general, Relay will generate Flow types for subscriptions at build time, with the following naming format: <subscription_name>.graphql.js.
- requestSubscription also takes onCompleted and onError callbacks, which will respectively be called when the subscription is successfully established, or when an error occurs.
- requestSubscription also takes an onNext callback, which will be called whenever a subscription payload is received.
- When the subscription payload is received, if the objects in the subscription payload have IDs, the records in the local store will automatically be updated with the new field values from the payload. In this case, it would automatically find the existing Post object matching the given ID in the store, and update the value of the like_count field.
- Note that any local data updates caused by the subscription will automatically cause components subscribed to the data to be notified of the change and re-render.
However, if the updates you wish to perform on the local data in response to the subscription are more complex than just updating the values of fields, like deleting or creating new records, or Adding and Removing Items From a Connection, you can provide an updater
function to requestSubscription
for full control over how to update the store:
import type {Environment} from 'react-relay';
import type {CommentCreateSubscribeData} from 'CommentCreateSubscription.graphql';
const {graphql, requestSubscription} = require('react-relay');
const {ConnectionHandler} = require('relay-runtime');
function commentCreateSubscribe(
environment: Environment,
postID: string,
input: CommentCreateSubscribeData,
) {
return requestSubscription(environment, {
subscription: graphql`
subscription CommentCreateSubscription(
$input: CommentCreateSubscribeData!
) {
comment_create_subscribe(data: $input) {
post_comment_edge {
cursor
node {
body {
text
}
}
}
}
}
`,
variables: {input},
updater: store => {
const postRecord = store.get(postID);
// Get connection record
const connectionRecord = ConnectionHandler.getConnection(
postRecord,
'CommentsComponent_comments_connection',
);
// Get the payload returned from the server
const payload = store.getRootField('comment_create_subscribe');
// Get the edge inside the payload
const serverEdge = payload.getLinkedRecord('post_comment_edge');
// Build edge for adding to the connection
const newEdge = ConnectionHandler.buildConnectionEdge(
store,
connectionRecord,
serverEdge,
);
// Add edge to the end of the connection
ConnectionHandler.insertEdgeAfter(connectionRecord, newEdge);
},
onCompleted: () => {} /* Subscription established */,
onError: error => {} /* Subscription errored */,
onNext: response => {} /* Subscription payload received */,
});
}
module.exports = {subscribe: commentCreateSubscribe};
Let's distill this example:
- updater takes a store argument, which is an instance of a [RecordSourceSelectorProxy](https://relay.dev/docs/en/relay-store.html#recordsourceselectorproxy); this interface allows you to imperatively write and read data directly to and from the Relay store. This means that you have full control over how to update the store in response to the subscription payload: you can create entirely new records, or update or delete existing ones. The full API for reading and writing to the Relay store is available here: https://relay.dev/docs/en/relay-store.html
- In our specific example, we're adding a new comment to our local store when we receive a subscription payload notifying us that a new comment has been created. Specifically, we're adding a new item to a connection; for more details on how that works, check out our Adding and Removing Items From a Connection section.
- Note that the subscription payload is a root field record that can be read from the store, specifically using the store.getRootField API. In our case, we're reading the comment_create_subscribe root field, which is a root field in the subscription response.
- Note that any local data updates caused by the subscription updater will automatically cause components subscribed to the data to be notified of the change and re-render.
Local Data Updates
There are a couple of APIs that Relay provides in order to make purely local updates to the Relay store (i.e. updates not tied to a server operation).
Note that local data updates can be made both on client-only data, or on regular data that was fetched from the server via an operation.
commitLocalUpdate
To make updates using an updater
function, you can use the commitLocalUpdate
API:
import type {Environment} from 'react-relay';
const {commitLocalUpdate, graphql} = require('react-relay');
const {ConnectionHandler} = require('relay-runtime');
function commitCommentCreateLocally(
environment: Environment,
postID: string,
) {
return commitLocalUpdate(environment, store => {
const postRecord = store.get(postID);
const connectionRecord = ConnectionHandler.getConnection(
postRecord,
'CommentsComponent_comments_connection',
);
// Create a new local Comment from scratch
const id = `client:new_comment:${randomID()}`;
const newCommentRecord = store.create(id, 'Comment');
// ... update new comment with content
// Create new edge from scratch
const newEdge = ConnectionHandler.createEdge(
store,
connectionRecord,
newCommentRecord,
'CommentEdge' /* GraphQl Type for edge */,
);
// Add edge to the end of the connection
ConnectionHandler.insertEdgeAfter(connectionRecord, newEdge);
});
}
module.exports = {commit: commitCommentCreateLocally};
- commitLocalUpdate simply takes an environment and an updater function.
- updater takes a store argument, which is an instance of a [RecordSourceSelectorProxy](https://relay.dev/docs/en/relay-store.html#recordsourceselectorproxy); this interface allows you to imperatively write and read data directly to and from the Relay store. This means that you have full control over how to update the store: you can create entirely new records, or update or delete existing ones. The full API for reading and writing to the Relay store is available here: https://relay.dev/docs/en/relay-store.html
- In our specific example, we're adding a new comment to our local store. Specifically, we're adding a new item to a connection; for more details on how that works, check out our Adding and Removing Items From a Connection section.
- Note that any local data updates will automatically cause components subscribed to the data to be notified of the change and re-render.
commitPayload
commitPayload
takes an OperationDescriptor
and the payload for the query, and writes it to the Relay Store. The payload will be resolved like a normal server response for a query.
import type {FooQueryRawResponse} from 'FooQuery.graphql'
const {createOperationDescriptor} = require('relay-runtime');
const operationDescriptor = createOperationDescriptor(FooQuery, {
id: 'an-id',
otherVariable: 'value',
});
const payload: FooQueryRawResponse = {...};
environment.commitPayload(operationDescriptor, payload);
- An OperationDescriptor can be created by using createOperationDescriptor; it takes the query and the query variables.
- The payload can be typed using the Flow type generated by adding @raw_response_type to the query.
- Note that any local data updates will automatically cause components subscribed to the data to be notified of the change and re-render.
Client-Only Data (Client Schema Extensions)
Relay provides the ability to extend the GraphQL schema on the client (i.e. in the browser), via client schema extensions, in order to model data that only needs to be created, read and updated on the client. This can be useful to add small pieces of information to data that is fetched from the server, or to entirely model client-specific state to be stored and managed by Relay.
Client schema extensions allow you to modify existing types on the schema (e.g. by adding new fields to a type), or to create entirely new types that only exist on the client.
Adding a Client Schema file
To add a client schema, create a new .graphql
file inside your src directory. The file can be named anything.
Extending Existing Types
In order to extend an existing type, add the extension to your client schema .graphql file:
extend type Comment {
is_new_comment: Boolean
}
- In this example, we're using the extend keyword to extend an existing type, and we're adding a new field, is_new_comment, to the existing Comment type, which we will be able to read in our components and update when necessary using normal Relay APIs; you might imagine that we'd use this field to render a different visual treatment for a comment if it's new, and that we'd set it when creating a new comment.
- Note that in order for Relay to pick up this extension, the file needs to be inside your src directory. The file can be named anything, e.g.: clientSchema.graphql.
Adding New Types
You can define types using the same regular GraphQL syntax, by defining it inside your client schema file:
enum FetchStatus {
FETCHED
PENDING
ERRORED
}
type FetchState {
# You can reuse client-only types to define other types
status: FetchStatus
# You can also reference regular server types
started_by: User!
}
extend type Item {
# You can extend server types with client-only types
fetch_state: FetchState
}
- In this contrived example, we're defining 2 new client-only types, an enum and a regular type. Note that they can reference each other as normal, and reference regular server-defined types. Also note that we can extend server types and add fields that are of our client-only types.
- As mentioned previously, we will be able to read and update this data normally via Relay APIs.
Reading Client-Only Data
We can read client-only data by selecting it inside fragments or queries as normal:
const data = useFragment(
graphql`
fragment CommentComponent_comment on Comment {
# We can select client-only fields as we would any other field
is_new_comment
body {
text
}
}
`,
props.comment,
);
Updating Client-Only Data
In order to update client-only data, you can do so regularly inside mutation or subscription updaters, or by using our primitives for doing local updates to the store.
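For example, a minimal sketch of setting the client-only is_new_comment field from the example above with commitLocalUpdate (assuming commentID identifies an existing Comment record in the store):
const {commitLocalUpdate} = require('react-relay');

function markCommentAsNew(environment, commentID) {
  commitLocalUpdate(environment, store => {
    // Get the record for the existing Comment
    const commentRecord = store.get(commentID);
    if (commentRecord != null) {
      // Set the client-only field; any component reading is_new_comment
      // will be notified and re-render with the new value.
      commentRecord.setValue(true, 'is_new_comment');
    }
  });
}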
Local Application State Management
TODO
Roughly, at a high level (see the sketch after this list):
- Read data from Relay
- Keep your state in React, possibly derive it from Relay data
- Write data back to Relay via mutations or local update
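Here is a rough sketch of that flow, with hypothetical component, fragment, and field names: data is read from Relay with useFragment, view state lives in React and is derived from the Relay data, and writes go back through the mutation or local update APIs described above.
const React = require('React');
const {useState} = require('React');
const {graphql, useFragment} = require('react-relay/hooks');

function CommentsFilter(props) {
  // 1. Read data from Relay
  const data = useFragment(
    graphql`
      fragment CommentsFilter_post on Post {
        comments(first: 10) {
          edges {
            node {
              is_new_comment
              body {
                text
              }
            }
          }
        }
      }
    `,
    props.post,
  );
  // 2. Keep UI state in React, derived from Relay data
  const [showOnlyNew, setShowOnlyNew] = useState(false);
  const edges = data.comments?.edges ?? [];
  const visible = showOnlyNew
    ? edges.filter(edge => edge.node.is_new_comment)
    : edges;
  // 3. Writes go back to Relay via mutations or commitLocalUpdate (not shown)
  return (
    <>
      <label>
        <input
          type="checkbox"
          checked={showOnlyNew}
          onChange={e => setShowOnlyNew(e.target.checked)}
        />
        Only show new comments
      </label>
      {visible.map(edge => (
        <div>{edge.node.body.text}</div>
      ))}
    </>
  );
}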
Accessing Data Outside React
This section covers less common use cases, which involve fetching and accessing data outside of our React APIs. Most of the time you will be directly using our React APIs, so you don't need to know this to start building with Relay. However, these APIs can be useful for more advanced use cases when you need more control over how data is fetched and managed, for example when writing pieces of infrastructure on top of Relay.
Fetching Queries
If you want to fetch a query outside of React, you can use the fetchQuery
function, which returns an observable:
import type {AppQuery} from 'AppQuery.graphql';
const {fetchQuery} = require('react-relay/hooks');
fetchQuery<AppQuery>(
environment,
graphql`
query AppQuery($id: ID!) {
user(id: $id) {
name
}
}
`,
{id: 4},
)
.subscribe({
start: () => {...},
complete: () => {...},
error: (error) => {...},
next: (data) => {...}
});
- fetchQuery will automatically save the fetched data to the in-memory Relay store, and notify any components subscribed to the relevant data.
- fetchQuery will NOT retain the data for the query, meaning that it is not guaranteed that the data will remain saved in the Relay store at any point after the request completes. If you wish to make sure that the data is retained outside of the scope of the request, you need to call environment.retain() directly on the query to ensure it doesn't get deleted. See Retaining Queries for more details.
- The data provided in the next callback represents a snapshot of the query data read from the Relay store at the moment a payload was received from the server.
- Note that we specify the AppQuery Flow type; this ensures that the type of the data provided by the observable matches the shape of the query, and enforces that the variables passed as input to fetchQuery match the type of the variables expected by the query.
If desired, you can convert the request into a Promise using .toPromise():
import type {AppQuery} from 'AppQuery.graphql';
const {fetchQuery} = require('react-relay/hooks');
fetchQuery<AppQuery>(
environment,
graphql`
query AppQuery($id: ID!) {
user(id: $id) {
name
}
}
`,
{id: 4},
)
.toPromise()
.then(data => {...})
.catch(error => {...});
- The returned Promise resolves to the query data, read out from the store when the first network response is received from the server. If the request fails, the promise will reject.
- Note that we specify the AppQuery Flow type; this ensures that the type of the data the promise will resolve to matches the shape of the query, and enforces that the variables passed as input to fetchQuery match the type of the variables expected by the query.
See also our API Reference for fetchQuery.
Prefetching Queries
This section covers prefetching queries from the client (if you're interested in preloading for initial load or transitions, see our Preloading Data section). Prefetching queries can be useful to anticipate user actions and increase the likelihood of data being immediately available when the user requests it.
TODO
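As a simple illustration of the idea, a component could start a fetchQuery (described in Fetching Queries above) when the user hovers over a link, so the data may already be in the store by the time they click; the UserLink component and UserLinkPrefetchQuery are hypothetical names for this sketch:
const React = require('React');
const {fetchQuery, graphql, useRelayEnvironment} = require('react-relay/hooks');

// Hypothetical query for the data we anticipate needing after the click
const prefetchQuery = graphql`
  query UserLinkPrefetchQuery($id: ID!) {
    user(id: $id) {
      name
    }
  }
`;

function UserLink(props) {
  const environment = useRelayEnvironment();
  const prefetch = () => {
    // Start fetching on hover; the response is saved to the Relay store,
    // so a subsequent read of the same data can render without waiting.
    fetchQuery(environment, prefetchQuery, {id: props.userID}).subscribe({});
  };
  return (
    <a href={props.href} onMouseEnter={prefetch}>
      {props.children}
    </a>
  );
}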
Subscribing to Queries
TODO
Reading Queries from Local Cache
TODO
Reading Fragments from Local Cache
TODO
Retaining Queries
In order to manually retain a query so that the data it references isn't garbage collected by Relay, we can use the environment.retain
method:
const {
createOperationDescriptor,
getRequest,
graphql,
} = require('relay-runtime')
// Query graphql object
const query = graphql`...`;
// Construct Relay's internal representation of the query
const queryRequest = getRequest(query)
const queryDescriptor = createOperationDescriptor(
queryRequest,
variables
);
// Retain query; this will prevent the data for this query and
// variables from being garbage collected by Relay
const disposable = environment.retain(queryDescriptor);
// Disposing of the disposable will release the data for this query
// and variables, meaning that it can be deleted at any moment
// by Relay's garbage collection if it hasn't been retained elsewhere
disposable.dispose();
- NOTE: Relay automatically manages query data retention based on any mounted query components that are rendering the data, so you usually should not need to call retain directly within product code. For any advanced or special use cases, query data retention should usually be handled within infra-level code, such as a Router.
Testing
See this guide for Testing Relay Components, which also applies to any components built using Relay Hooks.