Many people with autism are stressed individuals who find the world a confusing place (Vermeulen, 2013). So how does someone with autism achieve a sense of flow? McDonnell & Milton (2014) have argued that many repetitive activities may induce a flow state. One obvious area where flow can be achieved is when engaging in special interests. Special interests allow people to become absorbed in an area that gives them specialist knowledge and a sense of achievement. In addition, certain repetitive tasks can help people achieve a flow-like state of mind. These tasks can become absorbing and are an important part of people’s lives. The next time you see an individual with autism engaging in a repetitive task (like stacking Lego or playing a computer game), remember that these are not in themselves negative activities; they may well be reducing stress.
If you want to improve your support for people with autism from a stress perspective, a useful approach is to identify flow states for that person and develop a flow plan. Remember, the next time you see a person repeating seemingly meaningless behaviours, do not assume that this is always unpleasant for them – it might be a flow state, and beneficial for reducing stress.
Flow state is a term coined by Csikszentmihalyi to describe “the experience of complete absorption in the present moment” (Nakamura and Csikszentmihalyi, 2009). It is widely viewed as highly positive and many texts advise readers on how to attain it when performing tasks. Autistic people are sometimes puzzled that flow seems to be regarded as somewhat elusive and difficult to experience, since the common autistic experience of complete engagement with an interest fits the definition of flow well. Thus, it is not hard to find accounts of autistic detailed listening that seem to describe a flow state:
“When I work on my musical projects, I tend to hear the whole score in my head and piece every instrument loop detail where they fit. It relaxes me and makes me extremely aware of what I’m doing to the point that I lose track of time.”
Time flows differently when children work together, the older becoming aspirational peers for younger children, no bells demanding that they stop what they are doing to move in short blocks of time from math to reading to science to history in a repetitive daily cycle. Instead, they work on projects that engage them in experiences across content areas and extend time as they see the need.
We lose so much when we divide students by age… We lose peer mentoring, we lose the aspirations to be “like the big kids,” we lose the ability of younger kids to become leaders, and we lose the ability to let kids grow at their own rate. We also lose the shared public space which lies at the heart of community, culture, and democracy.
“When I talk about multiage learning, I don’t mean streaming. I imagine joyful, collaborative, hands-on, individualised learning that students personalise based on their interests, strengths, and needs. They create the context, we then add the content.” https://t.co/XDzami4aaj
We’ve noticed this with homeschooling/unschooling networks using programs like Science Olympiad. Students with Olympiad experience loop through helping newcomers and younger kids. They get to demonstrate their expertise and teach.
In the screenshot below, three freshly uploaded images are shown in the editor. When I tap an image, I see “Edit” in the resulting menu. I’m on the right track, yet already confused: I’m not sure which image I’m operating on. The selection indicator is too subtle, requiring me to lean in closer to the screen.
If the cursor is located after the selected image, the editor scrolls down to center the cursor upon exiting the Media Options flow. The image I was editing is now partially offscreen, increasing ambiguity.
This ambiguity creates anxiety. Which image am I editing? The “Media Settings” page offers no context.
My go-to move in the face of such ambiguity is to go back and reorient. But if I tap “Cancel” to return to the editor and establish context, the editor offers more ambiguity instead of reassurance by scrolling down as mentioned above. Even without the scrolling, the blue selection indicator requires me to squint. The selection also visually collides with the cursor (which is image height when on the same line as an image), increasing ambiguity further.
I write descriptive captions in the interest of accessibility. I need to see the image as I do this. Here’s what image captioning looks like in Ulysses on macOS.
And here’s what it looks like in Ulysses on iOS. A little scrolling back and forth between the image and description field is needed, but at least they’re on the same page. I’ll gladly scroll if it means getting images large enough for my eyes.
In both of those interfaces, the image is available for reference while captioning. Contrast them with the WP iOS app. The “Caption” screen consists of a single text input field. The image is not displayed. No information about the image is displayed. This means I’ll have to flip back and forth between the WP app and my camera roll app to write a caption.
If I want to consult the image from within the WP app instead of flipping to a different app, the journey is: two taps back to the editor, a bad scroll interaction depending on cursor location, peer over my bifocals at images and selection indicators, and then another three taps back to the Caption field. I’d have to do this over and over to transcribe the screenshots in this post. I started this post in the iOS app and quickly tired.
Calypso on iOS Safari
The iOS app’s caption flow does not work for me. So how about captioning flow on the mobile web? Alas, Calypso on iOS Safari is buggy, erratic, and frustrating to the point that I usually give up on it and go get the laptop. Sometimes, though, I can complete an editing session. In this shot, I’m adding a caption as part of image insertion flow. The image thumbnail is on the small side for me. I need big images when writing captions, especially for screenshots. Otherwise, I have to find the image in an interface that gives me a better view and then correlate back and forth.
In the following screenshots, I’m adding a caption to an image after it has been inserted. First, I delicately dismiss the Cut/Copy bar without dismissing the inline image toolbar hiding behind it. This is fussy and awkward.
Then I tap the caption button, wonder why it didn’t do anything, scroll down, realize a caption input unfurled below the fold, and start adding a caption.
Beneath the unfortunately numerous interaction bugs, there’s the possibility of a good, presbyopia-friendly flow. And even with the interaction bugs, at least I don’t have to caption an image I can’t see. There are many times I wish I could use the mobile web interface, but the scroll bleed, vscroll loss, keyboard flyup, lock-ups, crashes, requests for more memory, and general unpredictability exclude it from consideration.
Neither interface meets my needs for captioning flow. I need images to be present on the same screen as the fields that describe them. I need access to image views large enough for my presbyopic eyes to transcribe text from screenshots. I need caption fields with enough room to comfortably compose detailed image descriptions.