How some Memphians are focusing on the positive through yoga – WATN – Local 24
Posted: May 5, 2022 at 1:46 am
MEMPHIS, Tenn. – We all need a break at some point, and how we choose to take that break is up to us.
Many Memphians have taken to yoga and meditation as a way to get away from the stress of the world around them, if only for a moment.
"It's like we have to charge up our phones; coming to yoga is charging up our brains and our bodies. Coming to the present moment, practicing meditation, mindfulness," said Olivia Rokotnitz, co-owner of Delta Groove Yoga Studio. "It's a total recharge, recalibration for these beautiful brains of ours."
The folks at Delta Groove Yoga Studio said not only can it help your mind, it can help your body in many ways.
"I have more strength. I have more flexibility. I have more just general joy and vitality," said Valentine Leonard, co-owner of Delta Groove Yoga Studio. "The ways that yoga affects mental health is that it helps you calm down and focus."
"There is a way to start focusing more on the positive and find more joy in life. But we have to work at it," said Rokotnitz.
"You start to be able to focus from within," added Leonard. "And when you're able to start to look from within because it feels safer to be within, then you can really become yourself, fully."
"It feels good in my body and it feels good in my mind," said Jennifer Jones-Boyd.
So what got Jones-Boyd interested in yoga?
"I was walking by the studio because I live in the neighborhood and decided to pop in and picked up some information and thought I'd give it a try. That's all it took."
"Doing the postures on the mat correlates to me to being sometimes uncomfortable in our every day. You learn to breathe through it. You breathe through it. So mentally it helps me to quiet my mind and get out of my own way," she said.
"So you want to start with gentle practice that is mindfulness-based, focused on what your body is doing. How it feels to be doing those things. And that is always grounding. It's always helping to just calm and focus on being in the present rather than your mind just going in every which way," said Leonard.
"The physical body is the beginning," said Rokotnitz. "Then the breath, and then this learning to go inwards to really sense inwardly rather than to rely on the external. And that takes you into meditation, mindfulness and awareness of your own mental health."
She added, "People are realizing that yoga is essentially about balance. And it's not just balance of the physical body. It's balance of the mental and also just energy."
"Give it a try," said Jones-Boyd. "There's no wrong."
Everything we saw and tasted at Midwest Buddhist Meditation Center’s Thai Market – Detroit News
Posted: at 1:46 am
A taste of Thailand returns to Warren with the outdoor market at the Midwest Buddhist Meditation Center, back for a new season.
Offered just twice a month for a few hours on a Sunday, this family-friendly gathering is becoming a popular but still relatively low-key destination for fans of Asian cuisine. It's a chance to sample a variety of Thai and Laotian street food made fresh by members of the community.
"Our main thing is food," said the center's vice president, Lawan Chandruang. She said the vendors have Bangkok street food and cuisine from Laos, and this year there are vendors serving food from northern Thailand (think pork cracklings, sticky rice, sausage). "In July we will have more vendors selling a lot of varieties of vegetables, all organic."
When asked what is the one thing someone should try, Chandruang managed to narrow it down to three items: the curry rice (which has many varieties), noodles and desserts.
Some of my favorites were the pork and sticky rice wrapped in banana leaves, the steamed buns with choice of pork or sweet, yellow custard filling and the flaky puffs stuffed with chicken curry, which traveled well and were still crispy the next day.
The Thai market has a lot of sweet treats, too, for kids and adults. Flavored juice, coconut gummy candy, crispy rice cakes with sweet syrup, fried bananas and more.
"Every vendor here, they all have a family recipe ... it's all so good," said one of the MBMC's staff members, Saeng Rhodes. She said the cuisine at the market isn't just a carbon copy of what you'd find at a Thai or Laotian restaurant around town.
In addition to ready-to-eat snacks and meals, there are also vendors selling their own sauces and spices, which you can use to cook at home. Rhodes says they can even customize them on site, if you want more or less heat.
When you arrive, I recommend doing a lap around to see what's cooking and what looks the best. It's tempting to get one of everything, which is affordable, but maybe more than you need for the week. Grab a sweet Thai iced coffee or tea (what a perfect Sunday morning drink, especially when it gets hot outside) for $3 and then decide what your next move is.
Here's what else you need to know about a visit to the Thai Market.
Bring cash. Most of the vendors are only charging a few dollars for their items and cash is easiest and fastest.
Go early. The good stuff will sell out quickly. If one of the stalls has a line, go there first, because whatever they're dishing out is likely a favorite and will sell out. Last Sunday there weren't many lines and there was plentiful parking at an adjacent lot.
Go often. The market will change slightly as the season goes on. Fresh, organic produce becomes more available in the summer and throughout the harvest months. Additional vendors are expected to join the fray as the months go on, too.
Bring the family. There's room to sit and enjoy the pad Thai, papaya salad, noodle soup, steamed banana cake, strawberry milk, steamed buns and meat skewers. The center has a small playground for kids, and the inside is open if you need to use the restroom (you have to remove your shoes, though).
The Sunday markets start at 10 a.m. and run until 1 or 2 p.m., but food may sell out before then. The next one is May 22, followed by June 12 and 26, July 17 and 31, Aug. 14 and 28, Sept. 11 and 25, Oct. 9 and 23 and Nov. 6.
The Midwest Buddhist Meditation Center is at 29750 Ryan in Warren. Call (586) 573-2666 or visit MBMCmichigan.org. Updates about the market can be found on the center's Facebook page.
Twitter: @melodybaetens
How reading Meditations by Marcus Aurelius helped me survive the grief of losing my husband – Scroll.in
Posted: at 1:46 am
When I was a child, when I was an adolescent, books saved me from despair: that convinced me that culture was the highest of values.
It is a common misconception that to be a Stoic is to be in possession of a stiff upper lip, to be free from the tumultuous waves of one's emotions. But what this interpretation of Stoicism gets wrong is that our emotions, even the most painful ones, need not be our enemies if we can learn to think of them as our guides.
This might seem obviously false, or like the words of a person who has never encountered real suffering. But it was during one of the worst crises of my life that I found my way to Stoicism and, through Stoicism, to something that is as close to acceptance as I think it is possible to find on this plane of existence.
In September of 2013, my husband suddenly developed the strangest of illnesses. Describing him as sick seems almost farcical, as there were no fevers or tumours or anything really that we could point to and say: "This, this is what is wrong." But there was weakness and fatigue. And above all, there was confusion.
It took a couple of months, but eventually, he was diagnosed with myasthenia gravis: a rare autoimmune disease that we were told normally afflicts women under 40 and men over 60, neither of which he was, and that, all things considered, was relatively minor, and that we could likely expect to go spontaneously into remission over the next five to 10 years.
However, the prognosis turned out to be as off the mark as his chances of developing the disease in the first place. Two days before Thanksgiving, his body began to fail him. The man who had once carried me over a threshold no longer had the strength in his neck to lift his own head off a pillow. I called 911 over his objections and he was brought, protesting, to the hospital where he was ultimately admitted to the intensive-care unit. From there, he continued to decline.
I walked in on Thanksgiving morning as the nurses were moving him to change the sheets on his bed. What I witnessed will stay with me for the rest of my life: the man I love, the father of the one- and five-year-olds I had left at home, went into total respiratory failure. His entire body turned as purple as an eggplant, and I stood by while an emergency intubation was performed to save his life.
For just under a month, he persisted with tubes and machines performing all of his bodily functions. He had few moments of lucidity, most of them in fear, but none more fearful than when I signed the consent form over his objections to have a tracheotomy placed because, I was told, it had ceased to be safe for him to remain intubated the way he was.
That tracheotomy, however, would prove to be what killed him. I would be what proved to kill him. Because, after the crisis was over, after he started to walk again, and after he came home from rehab to have what would prove to be one last Christmas with his children, he asphyxiated in his sleep: a mucus plug, caused by the damage done to his trachea, killed him just as we had begun to plan for a second chance at life.
I got through the wake and the funeral on an unholy combination of Xanax, vodka and sheer force of will. The first free moment I had afterwards, though, I headed to what has long been my happy place: the Mabel Smith Douglass Library on the Rutgers New Brunswick campus.
I had gotten it into my head that I could find the comfort I desperately needed, if only I could read the Phaedo and convince myself of the immortality of the soul. I cannot say the attempt was successful. And I am still sorry for the poor librarian who had to make sense of my desperate tears at not finding Plato where he was supposed to be. But when she got me to where the books had been moved, it was Marcus Aurelius' Meditations that I took off the shelf, and that has made all the difference since.
The book's pages contain such simple wisdom that it can seem almost silly to say that I needed to see it written down, but Aurelius' injunction to "fight to be the person philosophy tried to make you" was the battle cry I needed.
I do not think it is an overstatement to say that what I found within the pages of the Meditations rescued me from the despair that was threatening to devour me. Suddenly widowed, with two small children I felt utterly unequipped to vouchsafe through the journey toward adulthood, there was footing to be found in Aurelius' instruction "not to be overwhelmed by what you imagine, but just do what you can and should." I still had no idea how I would handle my children's graduations, or puberty, or afford braces, let alone college, but it was a reminder that I did not need to solve those problems now.
Aurelius reminded me that where I was was not just where I was but when, and that there was no advantage to be found in unsticking myself from time. I would be lying if I said I learned to stop panicking immediately or instantly. But I learned to repeat to myself the instruction to "never let the future disturb you. You will meet it, if you have to, with the same weapons of reason which today arm you against the present." And I learned to take stock of the tools I had and how they could be used to solve the problems of the present rather than catastrophising the unknowns of the future.
But the passage that made the biggest difference, the passage I return to year after year as deathiversaries or new milestones threaten to drown me in waves of grief, is a reminder that the narrative we construct around what happens to us is, ultimately, up to us.
No matter how terrible what happened was, it is still our choice whether to understand our story as one of crippling defeat or a miraculous victory against the odds, even if all we do is get back up and learn to stand again.
I will not and cannot say that the death of my husband at just 33 years old is not a misfortune. Nor would I or could I say that I do not think it is an injustice for my two children to live almost the entirety of their lives without their father. But we have endured and prevailed, and that, I have learned to see, is a great good fortune I can celebrate.
Losing a loved one is, as Aurelius said, something that could happen to anyone. But not everyone remains unharmed by it. We mourn; we are not unaware of what we have lost.
But what we have gained is the perspective that true good fortune is what you make for yourself. We hold tighter to each other, to the truth that life is fleeting, and that each moment of joy that finds its way to us is a gift to be treasured. And, perhaps most importantly, we learn that, while we do not get to decide when we get shipwrecked, we do get to decide what we rebuild out of the debris.
This article first appeared on Aeon.
Great Minds: How meditation and exercise help mental health – New Zealand Herald
Posted: at 1:45 am
NZME's Great Minds project will examine the state of our nation's mental health and explore the growing impact mental health and anxiety have on Kiwis while searching for ways to improve it. Video / NZ Herald
Working full time and having three children means family life is "pretty full on" for James Mooney.
The 42-year-old says one way he keeps his mental health in check is transcendental meditation - an effortless way of dealing with stress and fatigue.
A Rotorua yoga teacher says yoga helps to calm feelings of worry and anxiety, and a Tauranga clinical exercise physiologist says exercise helps people experiencing anxiety and depression feel more at ease.
It comes as NZME launched a major editorial project, Great Minds, which will explore the growing impact of mental health and anxiety on Kiwis and how we can improve our wellbeing.
As well as investigative reporting on the state of our mental health services and the effect of the pandemic on New Zealanders, we share personal stories, interactive features and wellbeing ideas to help our readers as we emerge from Covid.
Mooney said he and his wife have three children; two are home-schooled, and the other has a disability.
He said Covid-19 brought uncertainty and difficulties, such as being in lockdown and being around his children 24 hours a day instead of going to work.
He regularly practises transcendental meditation and meditates for 20 minutes in the morning or evening.
"It decreases the fatigue - it gives your body that space to be able to relax and it takes you to a deeper relaxation than watching TV or going for a walk.
"Once you're able to relax, your brain is able to get the benefits of deep rest ... then it declutters your mind.
"It has been a good tool in the moments where the pressure does feel like it comes on."
Mooney and his wife first started practising transcendental meditation 10 years ago when they were expecting their first child.
They were approaching a time of their lives where they knew they would need help with stress relief and coping with fatigue, he said.
"For me, mental health is exacerbated by physical things as well.
"If I'm stressed out with work and I'm not sleeping and not fuelling my body well ... that's where the benefits from meditation really step in."
Transcendental meditation teacher Michael Kennedy said it was "a simple technique" of meditation that people practised for about 20 minutes by sitting with their eyes closed.
"It's really practical in that you can do it anywhere ... It's just a technique for settling the mind to a quieter level," the Katikati-based teacher said.
"There's an awareness out there that we need something to help our mental health.
"Everybody's dealing with high levels of stress and that can be disturbing and so people need time to recover from that."
When done regularly, it helped neutralise stress and tension, Kennedy said. Starting the day with meditation could also help people focus better at work.
"The mind is thinking all the time ... people notice that their breathing during a meditation is softer, their muscle systems are relaxed and basically, their mind is more settled during the practice.
"After meditation, they just feel a bit calmer and they feel maybe a little more energy after it."
Rotorua yoga teacher Jenny Lux said yoga was a "holistic" practice that had physical, mental and emotional benefits. It helped support a healthy and "accepting" outlook on life, she said.
"We go through different phases and different struggles and yoga helps to take a step back and observe yourself."
The mental practices of yoga, such as meditation and breathwork, helped to slow the "monkey mind".
"If you are in a frantic, worried or anxious state, breathwork can help you to calm that.
"If you're in a lethargic, stuck or depressed state, it can help to also enliven you."
Lux said there had been a "big upsurge" in demand for online yoga during the pandemic.
Natalja Wiese, clinical exercise physiologist at The Centre for Health, said many people had been coming to the clinic with anxiety due to Covid.
She said people were sometimes in a fight-or-flight state. In the flight state, people felt stressed and could not relax.
But after exercising, their parasympathetic nervous system was more relaxed, she said.
"That way, things like anxiety [and] depression feel more at ease."
Exercise made the heart rate go up so there was more blood flow and more oxygen going through the body, she said.
She said cardiovascular activity helped to get the heart rate up, such as going on the treadmill or going for a walk outside. Activities such as yoga, meditation and breathwork also helped.
"Through this whole pandemic, a lot of people obviously tend to just sit on the couch a lot ... which doesn't help with mental health."
Wiese said it was important to find a balance between working, relaxing and exercising.
If it is an emergency and you feel that you or someone else is at risk, call 111.
For counselling and support
Lifeline: Call 0800 543 354 or text 4357 (HELP)
Suicide Crisis Helpline: Call 0508 828 865 (0508 TAUTOKO)
Need to talk? Call or text 1737
Depression helpline: Call 0800 111 757 or text 4202
For children and young people
Youthline: Call 0800 376 633 or text 234
What's Up: Call 0800 942 8787 (11am to 11pm) or webchat (11am to 10.30pm)
The Lowdown: Text 5626 or webchat
For help with specific issues
Alcohol and Drug Helpline: Call 0800 787 797
Anxiety Helpline: Call 0800 269 4389 (0800 ANXIETY)
OutLine: Call 0800 688 5463 (0800 OUTLINE) (6pm-9pm)
Safe to talk (sexual harm): Call 0800 044 334 or text 4334
All services are free and available 24/7 unless otherwise specified.
For more information and support, talk to your local doctor, hauora, community mental health team, or counselling service.
Outdoor installations in Ann Arbor and Ypsi to feature local artists’ meditations on diversity – Second Wave Media
Posted: at 1:45 am
Embracing Our Differences - Southeast Michigan, a nonprofit international art installation celebrating diversity, has selected 31 local adults' and students' artwork to display alongside nationally recognized artists at four Ann Arbor and Ypsilanti parks.
Beginning May 14, visitors at Gallup Park and Leslie Science and Nature Center in Ann Arbor, and Riverside Park and Parkridge Park in Ypsilanti, will be able to enjoy the pieces as billboard-sized banners. The installations are expected to stay up until September.
"All of the banners are meant to spark discussion about diversity. They talk about different kinds of diversity: race, sex, LGBTQ, and physical and mental differences, as well," says Nancy Margolis, president of Embracing Our Differences - Southeast Michigan. "The whole idea is to get children and people to think about how there are differences in the world, and how much each of these differences can enrich our lives by understanding them."
Sixty banners will be spread out across the parks. Embracing Our Differences - Southeast Michigan received 40 entries in response to a call for submissions from Washtenaw County residents last year; those entries were pared down by a panel of local judges. Some artwork from the original Embracing Our Differences nonprofit, based in Sarasota, Fla., has also been chosen to round out the displays.
"The submissions you'll see from Washtenaw County are just fabulous," Margolis says. "Some are done by children as young as fourth grade. One was done by a group of children who did it as a collaborative effort in Ypsilanti. All are wonderful."
Starting May 17, Embracing Our Differences - Southeast Michigan will offer no-cost field trips to the Gallup Park and Riverside Park installations for all Ann Arbor and Ypsilanti public schools. The docent-led trips are already being booked.
Buoyed by the interest and wide community support, Margolis also hopes to run field trips for children's camps. Like the school field trips, there will be no cost. And for camps that don't have buses, Embracing Our Differences - Southeast Michigan plans to arrange Ann Arbor public school buses as a transportation option to the sites.
"Teachers are being very creative in how they can use these banners to bring awareness to diversity. We've got some who are planning fun play or picnic lunches after we talk about the banners," Margolis says. "It's a wonderful way to get the message of diversity across in a way that gets kids talking about differences, belonging, and understanding."
Jaishree Drepaul is a freelance writer and editor currently based in Ann Arbor. She can be reached at jaishreeedit@gmail.com.
Photo courtesy of Embracing Our Differences - Southeast Michigan.
Ocean Vuong on ‘Time Is a Mother’ and Poetry’s Power – Tricycle
Posted: at 1:44 am
Poet Ocean Vuong is always grieving. As an artist, he sees language as an architecture to reckon with loss, both personal and communal, and his writing is informed by his decades-long practice of death meditation. "The poem is a profound death meditation," he shared with Tricycle editor-in-chief James Shaheen and meditation teacher Sharon Salzberg in a recent episode of Life As It Is. "It's a place where death doesn't even have to be mentioned in order to be felt. Sometimes you can feel that death and dying haunt a work without it having to be named." His latest collection, Time Is a Mother, was written in the aftermath of his mother's death from cancer in late 2019 and offers an intimate portrait of grief, loss, and survival.
James and Sharon sat down with Vuong to discuss the immediacy of poetry, the cultural work of the refugee, and the relationship between his poetry and his Buddhist practice. Read an excerpt from their conversation below, and listen to the full episode here.
Sharon Salzberg: You grew up surrounded by storytellers, and you've spoken about how you see writing as a kind of communal exchange. Can you share more about how the styles of storytelling you encountered as a child influence your poetry?
Ocean Vuong: Absolutely. When we think of the refugee, we often think of a passive, needful, and pandering subject. There's this perennial victimhood that is reductive to the identity of people who are very complex. For me, I like to reorient how we see refugees: as people who are incredibly creative and innovative and have to make life-saving decisions not only for themselves but for the people they love. Nobody survives by accident. Survival is an innovative act. I saw that right away with the women in my family in the stories they decided to tell. They had to make decisions. The mind can only hold so much, so what do you remember? What do you leave behind? They're doing cultural work.
As a culture, we're having discussions now of which works we should read and which works we should leave in the past. Who do we carry? Who's problematic? Which texts are harmful? We're doing this all the time as a culture, and often it's in institutions and discussions and syllabuses. But I realized these women were already doing this on the boats. As they were fleeing, they were deciding: What do I give to my children, to my grandchildren? What stories do I pass on so that they can make use of them? This is at the heart of civilization. We can go back to the epic poets of Gilgamesh or Homer and the Iliad. Those texts were so vital to the flourishing of our cultures because they were civic treatises about one's obligation to the community through reciprocal civic bonds. I felt the same thing happened with how the women in my family told stories. There was always a lesson. There was always a purpose. And they edited their stories down every time they told them. Looking back, I realize that I was at the heart of a master class: how my grandmother would pause over details, what details to leave in, what to gloss over, how she sped up time and slowed it down. I would learn much later in college how Faulkner and Whitman and Toni Morrison did this as well, and I realized my grandmother was doing this intuitively. And so when I look at my personal canon of creativity, the women who raised me are right up there with the Faulkners, the Joyces, the Virginia Woolfs and James Baldwins.
James Shaheen: You've talked about the language lab and the linguistic innovation that takes place in queer communities of color. I'm wondering if you can share about the role poetry plays in articulating different possible futures.
Ocean Vuong: This has always been poetry's role. I've always felt that as long as there were soldiers, there were poets, and I think that's always true: the history of poetry is the history of displacement. It's the history of war. It's our species-wide condition. And that's why I think it can never die, regardless of how we read it. There have been conversations about the crisis of printing, but now there's Twitter poetry and Instagram poetry because it's so portable. For any marginalized community, innovation often occurs through the most portable and malleable forms of art. This is true with hip-hop and how hip-hop blurs into poetry for communities of color in spoken-word traditions. Poetry can happen anywhere. It has the power to interrupt. You don't need a plot or context. You just need the self, the body. A poem can happen at any given moment. The power to be portable and interrupt is why poetry can cross so many borders and why it means so much to so many people. You can participate in it. I tell my students that to be a nurse or a doctor, you have to get a nursing degree or go to medical school for eight years, maybe a decade. But if you want to be a poet, you could do it tonight. You could do it right now. And there's an incredible exhilaration of power that the form really offers you.
People often feel frustrated with poetry because they feel like it's beyond them. We're taught to plunder a text for a thesis. As soon as we're in elementary school, we're asked, "What's the summary of this passage?" Critical thinking tells us that we are outside of meaning and reading will help us enter, and then we become hunters in the text. But that's only one way of reading, and it's a failure of our pedagogy, because another way to read is to read a poem the way we experience weather. What is the meaning of rain? Rain doesn't have a secret. It just exists. It's the same with music. You experience music. Why do we cry listening to Bach? There's no meaning inherent in the notes. This is also true with mantras. There's no inherent meaning, but the intention creates a profound effect on the sonic wave and then the brain and then the emotions.
Part of my work as an educator is to undo a lot of these strict ways of reading that have been hammered into our students. When I encourage my students to read this way, they get really excited but also really nervous. They'll say, "Oh my God, what do you mean, it could be anything?" And I say, "Yeah, just like weather and music." Just experience it, and then you realize that there's so much pleasure. I often turn to Basho and Issa, the 17th- and 18th-century Japanese poets who were influenced by Buddhism. One of my favorite Issa poems is the haiku "Crickets on a log, floating downriver, still singing." You don't need to decode that. You can get a PhD on it if you like. Nobody will be upset. But you don't need to. It's there. To me, poetry is both rhetoric and the enactment of life as it is perceived. It's a phenomenological approach, and there's no right or wrong way to experience it.
The art of mindfulness – The Hawk Newspaper
Posted: at 1:44 am
It was day 18 into a month of daily mindfulness. I was running late to my 10:10 a.m. class, but I needed coffee before I tried to digest Marketing Strategy. The kettle finished and I poured the boiling water over a pile of instant coffee powder. I went to grab the mug for my first sip and I tipped it.
Scalding hot coffee raced across my desk and splashed everywhere. Normally, I would deem the day a failure, leave without coffee and mope around in a grouchy mood. This time was different. I took a deep breath, calmly cleaned up the mess and made myself a fresh cup of coffee.
According to mindful.org, about 95% of our behavior runs on autopilot. After learning this, I refused to let the statistic apply to me.
So for the month of March, I practiced mindfulness every day. Mindfulness can be any activity where you are fully present in whatever it is that you are doing. I practiced yoga, conscious eating, walking and meditation.
This practice was inspired by my 11:15 a.m. Mindful Communications class, taught by Aime Knight, Ph.D., associate professor of communication and media studies. Knight has been practicing mindfulness for eight years and debuted the class Mindful Communications this semester.
"No matter what is happening, whether it's a pandemic, or you're having a disagreement with someone or something tragic happens, [my students] have tools to be able to manage their mental state," Knight said.
In the class, we learned different ways to practice mindfulness, how to implement it in everyday life and how to cultivate it in ourselves and our communities. Knight believed the class to be beneficial for students because of the control that mindfulness allows them to have over their emotions.
Jack McCaul '22, another student enrolled in Knight's class, has been practicing mindfulness since sophomore year of high school, when his dad introduced him to the teachings of Zen Buddhist monk Thich Nhat Hanh.
McCaul said mindfulness changed his life by shifting his autopilot lifestyle to a more intentional mindset, where he could focus on the people and things he truly cared about.
"Mindfulness allowed me to really love my friends and understand why I should tell people that I love them, how to treat people and how precious life is in general," McCaul said.
At the start of my 30-day journey, I found it hard to carve out a long chunk of time to sit on a pillow and meditate. I had put so much pressure on myself to sit through lengthy meditations, which left me feeling anxious as I constantly wondered how much time was left. It wasn't until day 12 that I realized even a three-minute meditation can help bring me back to the present moment.
By day 18 I felt like I had full control of my emotions.
Jennifer Fisher, therapist for St. Joe's Counseling and Psychological Services (CAPS) and organizer of the Mindful Morning meditation series held on Zoom every Wednesday, was not surprised when I told her my coffee anecdote. She said students often feel rushed in the mornings and carry that rushed feeling with them throughout the day.
"Practicing mindfulness in the morning for students would be a really good skill to develop because it would help set themselves up and get into a healthy routine," Fisher said.
I agreed, as it was the moment I spilled my coffee that I realized the true benefits of maintaining a mindfulness practice.
I could have easily reacted negatively, allowed my emotions to take control and subconsciously ruined my entire day. However, through the awareness of my thoughts and utilizing the breathing techniques I learned in Knight's class, I was able to pause and choose how I wanted to react.
After concluding my 30 days of daily practice, I felt that the everyday worries that cluttered my mind were cleared. My emotions were more stable, I was more observant of my surroundings and more compassionate toward my friends.
I hope to keep mindfulness a part of my daily routine, as the practice only becomes stronger with time.
Rapid Adaptation of Deep Learning Teaches Drones to Survive Any Weather – Caltech
Posted: at 1:44 am
To be truly useful, drones (that is, autonomous flying vehicles) will need to learn to navigate real-world weather and wind conditions.
Right now, drones are either flown under controlled conditions, with no wind, or are operated by humans using remote controls. Drones have been taught to fly in formation in the open skies, but those flights are usually conducted under ideal conditions and circumstances.
However, for drones to autonomously perform necessary but quotidian tasks, such as delivering packages or airlifting injured drivers from a traffic accident, they must be able to adapt to wind conditions in real time, rolling with the punches, meteorologically speaking.
To face this challenge, a team of engineers from Caltech has developed Neural-Fly, a deep-learning method that can help drones cope with new and unknown wind conditions in real time just by updating a few key parameters.
Neural-Fly is described in a study published on May 4 in Science Robotics. The corresponding author is Soon-Jo Chung, Bren Professor of Aerospace and Control and Dynamical Systems and Jet Propulsion Laboratory Research Scientist. Caltech graduate students Michael O'Connell (MS '18) and Guanya Shi are the co-first authors.
Neural-Fly was tested at Caltech's Center for Autonomous Systems and Technologies (CAST) using its Real Weather Wind Tunnel, a custom 10-foot-by-10-foot array of more than 1,200 tiny computer-controlled fans that allows engineers to simulate everything from a light gust to a gale.
"The issue is that the direct and specific effect of various wind conditions on aircraft dynamics, performance, and stability cannot be accurately characterized as a simple mathematical model," Chung says. "Rather than try to qualify and quantify each and every effect of turbulent and unpredictable wind conditions we often experience in air travel, we instead employ a combined approach of deep learning and adaptive control that allows the aircraft to learn from previous experiences and adapt to new conditions on the fly with stability and robustness guarantees."
Time-lapse photo shows a drone equipped with Neural-Fly maintaining a figure-eight course amid stiff winds at Caltech's Real Weather Wind Tunnel.
O'Connell adds: "We have many different models derived from fluid mechanics, but achieving the right model fidelity and tuning that model for each vehicle, wind condition, and operating mode is challenging. On the other hand, existing machine learning methods require huge amounts of data to train yet do not match state-of-the-art flight performance achieved using classical physics-based methods. Moreover, adapting an entire deep neural network in real time is a huge, if not currently impossible task."
Neural-Fly, the researchers say, gets around these challenges by using a so-called separation strategy, through which only a few parameters of the neural network must be updated in real time.
"This is achieved with our new meta-learning algorithm, which pre-trains the neural network so that only these key parameters need to be updated to effectively capture the changing environment," Shi says.
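The flavor of this separation strategy can be sketched in a toy example (this is not the Neural-Fly code; all names, dimensions, and the random "pretrained" network are invented for illustration): a deep feature map is frozen at deployment, and only a small set of linear coefficients on top of it is refit from a short batch of flight measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the meta-trained network: a fixed nonlinear map from drone
# state to wind-effect features. Only `a_hat` below is adapted in flight.
W = rng.normal(size=(16, 3))
def basis(state):
    return np.tanh(W @ state)                  # frozen learned features

# Unknown aerodynamic residual the controller must compensate (simulated).
a_true = rng.normal(size=(3, 16))
def residual_force(state):
    return a_true @ basis(state)

# "Adaptation": refit only the linear output coefficients from a short
# batch of recent measurements, leaving the deep features untouched.
states = rng.normal(size=(200, 3))
Phi = np.tanh(states @ W.T)                    # (200, 16) feature matrix
F = np.array([residual_force(s) for s in states])
a_hat = np.linalg.lstsq(Phi, F, rcond=None)[0].T

x = rng.normal(size=3)
print(np.allclose(a_hat @ basis(x), residual_force(x)))  # True
```

Because only the 3×16 coefficient matrix is updated, the refit is cheap enough to imagine running in a real-time loop, which is the point of updating "a few key parameters" rather than the whole network.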
After as little as 12 minutes of flight data, autonomous quadrotor drones equipped with Neural-Fly learn to respond to strong winds so well that their performance improves significantly (as measured by their ability to precisely follow a flight path). The error in following that flight path is around 2.5 to 4 times smaller than that of current state-of-the-art drones equipped with similar adaptive control algorithms that identify and respond to aerodynamic effects, but without deep neural networks.
Out of the lab and into the sky: engineers test Neural-Fly in the open air on Caltech's campus
Neural-Fly, which was developed in collaboration with Caltech's Yisong Yue, Professor of Computing and Mathematical Sciences, and Anima Anandkumar, Bren Professor of Computing and Mathematical Sciences, is based on earlier systems known as Neural-Lander and Neural-Swarm. Neural-Lander also used a deep-learning method to track the position and speed of the drone as it landed and modify its landing trajectory and rotor speed to compensate for the rotors' backwash from the ground and achieve the smoothest possible landing; Neural-Swarm taught drones to fly autonomously in close proximity to each other.
Though landing might seem more complex than flying, Neural-Fly, unlike the earlier systems, can learn in real time. As such, it can respond to changes in wind on the fly, and it does not require tweaking after the fact. Neural-Fly performed as well in flight tests conducted outside the CAST facility as it did in the wind tunnel. Further, the team has shown that flight data gathered by an individual drone can be transferred to another drone, building a pool of knowledge for autonomous vehicles.
(L to R) Guanya Shi, Soon-Jo Chung, and Michael O'Connell, in front of the wall of fans at Caltech's Center for Autonomous Systems and Technologies
At the CAST Real Weather Wind Tunnel, test drones were tasked with flying in a pre-described figure-eight pattern while they were blasted with winds up to 12.1 meters per second (roughly 27 miles per hour, or a six on the Beaufort scale of wind speeds). This is classified as a "strong breeze," in which it would be difficult to use an umbrella. It ranks just below a "moderate gale," in which it would be difficult to move and whole trees would be swaying. This wind speed is twice as fast as the speeds the drone encountered during neural network training, which suggests Neural-Fly can extrapolate and generalize well to unseen and harsher weather.
The drones were equipped with a standard, off-the-shelf flight control computer that is commonly used by the drone research and hobbyist community. Neural-Fly was implemented in an onboard Raspberry Pi 4 computer that is the size of a credit card and retails for around $20.
The Science Robotics paper is titled "Neural-Fly Enables Rapid Learning for Agile Flight in Strong Winds." Coauthors include Anandkumar and Yue, as well as Xichen Shi (PhD '21), and former Caltech postdoc Kamyar Azizzadenesheli, now an assistant professor of computer science at Purdue University. Funding for this research came from the Defense Advanced Research Projects Agency (DARPA) and Raytheon.
What's the transformer machine learning model? And why should you care? – The Next Web
Posted: at 1:44 am
This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI. (In partnership with Paperspace)
In recent years, the transformer model has become one of the main highlights of advances in deep learning and deep neural networks. It is mainly used for advanced applications in natural language processing. Google is using it to enhance its search engine results. OpenAI has used transformers to create its famous GPT-2 and GPT-3 models.
Since its debut in 2017, the transformer architecture has evolved and branched out into many different variants, expanding beyond language tasks into other areas. Transformers have been used for time series forecasting. They are the key innovation behind AlphaFold, DeepMind's protein structure prediction model. Codex, OpenAI's source code-generation model, is based on transformers. More recently, transformers have found their way into computer vision, where they are slowly replacing convolutional neural networks (CNN) in many complicated tasks.
Researchers are still exploring ways to improve transformers and use them in new applications. Here is a brief explainer about what makes transformers exciting and how they work.
The classic feed-forward neural network is not designed to keep track of sequential data and maps each input into an output. This works for tasks such as classifying images but fails on sequential data such as text. A machine learning model that processes text must not only compute every word but also take into consideration how words come in sequences and relate to each other. The meaning of words can change depending on other words that come before and after them in the sentence.
Before transformers, recurrent neural networks (RNN) were the go-to solution for natural language processing. When provided with a sequence of words, an RNN processes the first word and feeds back the result into the layer that processes the next word. This enables it to keep track of the entire sentence instead of processing each word separately.
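That sequential loop can be sketched in a few lines (toy dimensions and random weights, purely illustrative); note that the hidden state is threaded through the loop, so step t cannot be computed before step t-1:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h = 4, 8
Wx = rng.normal(scale=0.1, size=(d_h, d_in))   # input-to-hidden weights
Wh = rng.normal(scale=0.1, size=(d_h, d_h))    # hidden-to-hidden weights

# Each step consumes one input and the previous hidden state; this data
# dependence is why RNNs resist parallelization across the sequence.
h = np.zeros(d_h)
for x in rng.normal(size=(10, d_in)):          # a 10-step input sequence
    h = np.tanh(Wx @ x + Wh @ h)
print(h.shape)  # (8,)
```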
Recurrent neural nets had disadvantages that limited their usefulness. First, they were very slow. Since they had to process data sequentially, they could not take advantage of parallel computing hardware and graphics processing units (GPU) in training and inference. Second, they could not handle long sequences of text. As the RNN got deeper into a text excerpt, the effects of the first words of the sentence gradually faded. This problem, known as vanishing gradients, was problematic when two linked words were very far apart in the text. And third, they only captured the relations between a word and the words that came before it. In reality, the meaning of words depends on the words that come both before and after them.
Long short-term memory (LSTM) networks, the successor to RNNs, solved the vanishing gradients problem to some degree and could handle longer sequences of text. But LSTMs were even slower to train than RNNs and still couldn't take full advantage of parallel computing, because they still relied on the serial processing of text sequences.
Transformers, introduced in the 2017 paper "Attention Is All You Need," made two key contributions. First, they made it possible to process entire sequences in parallel, making it possible to scale the speed and capacity of sequential deep learning models to unprecedented rates. And second, they introduced attention mechanisms that made it possible to track the relations between words across very long text sequences in both forward and reverse directions.
Before we discuss how the transformer model works, it is worth discussing the types of problems that sequential neural networks solve.
A vector-to-sequence model takes a single input, such as an image, and produces a sequence of data, such as a description.
A sequence-to-vector model takes a sequence as input, such as a product review or a social media post, and outputs a single value, such as a sentiment score.
A sequence-to-sequence model takes a sequence as input, such as an English sentence, and outputs another sequence, such as the French translation of the sentence.
Despite their differences, all these types of models have one thing in common. They learn representations. The job of a neural network is to transform one type of data into another. During training, the hidden layers of the neural network (the layers that stand between the input and output) tune their parameters in a way that best represents the features of the input data type and maps it to the output.
The original transformer was designed as a sequence-to-sequence (seq2seq) model for machine translation (of course, seq2seq models are not limited to translation tasks). It is composed of an encoder module that compresses an input string from the source language into a vector that represents the words and their relations to each other. The decoder module transforms the encoded vector into a string of text in the destination language.
The input text must be processed and transformed into a unified format before being fed to the transformer. First, the text goes through a tokenizer, which breaks it down into chunks of characters that can be processed separately. The tokenization algorithm can depend on the application. In most cases, every word and punctuation mark roughly counts as one token. Some suffixes and prefixes count as separate tokens (e.g., "ize," "ly," and "pre"). The tokenizer produces a list of numbers that represent the token IDs of the input text.
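A toy tokenizer makes the text-to-ID mapping concrete (real tokenizers such as BPE or WordPiece learn subword vocabularies from data; this one just splits on words and punctuation and assigns IDs on first sight):

```python
import re

vocab = {}  # grows as new tokens are seen

def tokenize(text):
    # Split into lowercase words and individual punctuation marks,
    # then map each token to a stable integer ID.
    tokens = re.findall(r"\w+|[^\w\s]", text.lower())
    return [vocab.setdefault(t, len(vocab)) for t in tokens]

print(tokenize("The cat crossed the road."))  # [0, 1, 2, 0, 3, 4]
```

Note that both occurrences of "the" map to the same ID, which is what lets the downstream embedding layer treat them as the same word.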
The tokens are then converted into word embeddings. A word embedding is a vector that tries to capture the meaning of words in a multi-dimensional space. For example, the words "cat" and "dog" can have similar values across some dimensions because they are both used in sentences about animals and house pets. However, "cat" is closer to "lion" than to "wolf" across some other dimension that separates felines from canids. Similarly, "Paris" and "London" might be close to each other because they are both cities. However, "London" is closer to "England" and "Paris" to "France" on a dimension that separates countries. Word embeddings usually have hundreds of dimensions.
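The geometry can be illustrated with hand-picked toy vectors (real embeddings are learned, not hand-set, and have hundreds of dimensions); cosine similarity is the usual closeness measure:

```python
import numpy as np

# Toy 3-dimensional "embeddings", chosen so that the first two dimensions
# loosely encode "house pet" and the third loosely encodes "city".
emb = {
    "cat":   np.array([0.9, 0.8, 0.1]),
    "dog":   np.array([0.8, 0.9, 0.1]),
    "paris": np.array([0.1, 0.1, 0.9]),
}

def cosine(a, b):
    # 1.0 for identical directions, near 0 for unrelated ones.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(emb["cat"], emb["dog"]) > cosine(emb["cat"], emb["paris"]))  # True
```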
Word embeddings are created by embedding models, which are trained separately from the transformer. There are several pre-trained embedding models that are used for language tasks.
Once the sentence is transformed into a list of word embeddings, it is fed into the transformers encoder module. Unlike RNN and LSTM models, the transformer does not receive one input at a time. It can receive an entire sentences worth of embedding values and process them in parallel. This makes transformers more compute-efficient than their predecessors and also enables them to examine the context of the text in both forward and backward sequences.
To preserve the sequential nature of the words in the sentence, the transformer applies positional encoding, which basically means that it modifies the values of each embedding vector to represent its location in the text.
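One standard choice, used in the original transformer paper, is the sinusoidal encoding, where each position gets a unique pattern of sine and cosine values that is added to its embedding:

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    # PE[pos, 2i]   = sin(pos / 10000^(2i/d_model))
    # PE[pos, 2i+1] = cos(pos / 10000^(2i/d_model))
    pos = np.arange(seq_len)[:, None]
    i = np.arange(0, d_model, 2)[None, :]
    angles = pos / np.power(10000.0, i / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)   # even dimensions
    pe[:, 1::2] = np.cos(angles)   # odd dimensions
    return pe

pe = positional_encoding(seq_len=50, d_model=64)
print(pe.shape)  # (50, 64)
```

Because each frequency varies at a different rate, nearby positions get similar encodings and distant positions get distinct ones, letting the model recover word order despite processing the sequence in parallel.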
Next, the input is passed to the first encoder block, which processes it through an attention layer. The attention layer tries to capture the relations between the words in the sentence. For example, consider the sentence "The big black cat crossed the road after it dropped a bottle on its side." Here, the model must associate "it" with "cat" and "its" with "bottle." It should also establish other associations, such as "big" and "cat" or "crossed" and "cat." Otherwise put, the attention layer receives a list of word embeddings that represent the values of individual words and produces a list of vectors that represent both individual words and their relations to each other. The attention layer contains multiple attention heads, each of which can capture a different kind of relation between words.
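At the core of each attention head is scaled dot-product attention; a minimal NumPy version (toy shapes, random inputs) shows the mechanics:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Each output row is a weighted mix of the value vectors, with weights
    # given by how strongly each query matches each key.
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)        # (n_tokens, n_tokens) affinities
    weights = softmax(scores, axis=-1)     # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
n_tokens, d_k = 5, 8
Q, K, V = (rng.normal(size=(n_tokens, d_k)) for _ in range(3))
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (5, 8)
```

In a real transformer, Q, K, and V are linear projections of the word embeddings, and each attention head learns its own projections, which is how different heads capture different kinds of relations.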
The output of the attention layer is fed to a feed-forward neural network that transforms it into a vector representation and sends it to the next attention layer. Transformers contain several blocks of attention and feed-forward layers to gradually capture more complicated relationships.
The task of the decoder module is to translate the encoders attention vector into the output data (e.g., the translated version of the input text). During the training phase, the decoder has access both to the attention vector produced by the encoder and the expected outcome (e.g., the translated string).
The decoder uses the same tokenization, word embedding, and attention mechanism to process the expected outcome and create attention vectors. It then passes this attention vector, along with the encoder's output, through an attention layer that establishes relations between the input and output values. In the translation application, this is the part where the words from the source and destination languages are mapped to each other. Like the encoder module's, the decoder's attention vector is passed through a feed-forward layer. Its result is then mapped to a very large vector, the size of the target vocabulary (in the case of language translation, this can span tens of thousands of words).
During training, the transformer is provided with a very large corpus of paired examples (e.g., English sentences and their corresponding French translations). The encoder module receives and processes the full input string. The decoder, however, receives a masked version of the output string, one word at a time, and tries to establish the mappings between the encoded attention vector and the expected outcome. The decoder tries to predict the next word and makes corrections based on the difference between its output and the expected outcome. This feedback enables the transformer to modify the parameters of the encoder and decoder and gradually create the right mappings between the input and output languages.
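The masking that keeps the decoder from peeking ahead can be sketched as a look-ahead mask over attention scores (illustrative):

```python
import numpy as np

# Causal mask: position i may attend only to positions 0..i, so during
# training the decoder cannot see the words it is asked to predict.
seq_len = 5
mask = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)
scores = np.zeros((seq_len, seq_len))
scores[mask] = -np.inf        # softmax turns -inf scores into zero weight
print(mask.astype(int))
```

Each row of the printed matrix marks (with 1s) the future positions that row's token is forbidden to attend to.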
The more training data and parameters the transformer has, the more capacity it gains to maintain coherence and consistency across long sequences of text.
In the machine translation example we examined above, the encoder module of the transformer learns the relations between English words and sentences, and the decoder learns the mappings between English and French.
But not all transformer applications require both the encoder and decoder module. For example, the GPT family of large language models uses stacks of decoder modules to generate text. BERT, another variation of the transformer model developed by researchers at Google, only uses encoder modules.
The advantage of some of these architectures is that they can be trained through self-supervised or unsupervised methods. BERT, for example, does much of its training by taking large corpora of unlabeled text, masking parts of it, and trying to predict the missing parts. It then tunes its parameters based on how close its predictions were to the actual data. By continuously going through this process, BERT captures the statistical relations between different words in different contexts. After this pretraining phase, BERT can be fine-tuned for a downstream task such as question answering, text summarization, or sentiment analysis by training it on a small number of labeled examples.
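The masked-word objective can be sketched in a few lines (a toy example; in BERT's actual pretraining, roughly 15% of the tokens in each sequence are selected and predicted with the full model):

```python
# Hide one token; the model is trained to recover it from the context on
# both sides, which is what makes the pretraining bidirectional.
tokens = ["the", "cat", "sat", "on", "the", "mat"]
i = 2                                  # position chosen for masking
target = tokens[i]                     # label the model must predict
masked = tokens[:i] + ["[MASK]"] + tokens[i + 1:]
print(masked, "->", target)
```

No human labels are needed: the original text itself supplies the target, which is why this counts as self-supervised learning.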
Using unsupervised and self-supervised pretraining reduces the manual effort required to annotate training data.
A lot more can be said about transformers and the new applications they are unlocking, which is out of the scope of this article. Researchers are still finding ways to squeeze more out of transformers.
Transformers have also created discussions about language understanding and artificial general intelligence. What is clear is that transformers, like other neural networks, are statistical models that capture regularities in data in clever and complicated ways. They do not understand language in the way that humans do. But they are exciting and useful nonetheless and have a lot to offer.
This article was originally written by Ben Dickson and published on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech, and what we need to look out for. You can read the original article here.
BigBear.ai to Highlight Artificial Intelligence and Machine Learning Capabilities at Upcoming Industry Events – Business Wire
Posted: at 1:44 am
COLUMBIA, Md.--(BUSINESS WIRE)--BigBear.ai (NYSE: BBAI), a leader in AI-powered analytics and cyber engineering solutions, announced that company executives are embarking on a thought-leadership campaign across multiple global industry events. The campaign will emphasize how the company's advancements in AI technologies will impact the federal and commercial markets in the coming months.
At these events, BigBear.ai leaders will highlight the capabilities of BigBear.ai's newly acquired company, ProModel Corporation, the importance of defining responsible AI usage, and how federal and commercial organizations leverage AI and ML.
The events BigBear.ai is scheduled to address include:
CTMA Partners Meeting May 3-5, 2022: Virginia Beach, VA
Due to the rapid deployment and advancement of sensor technologies, artificial intelligence, and data science, the Department of Defense has turned to a more predictive approach to maintaining technology assets. The department's recently revamped condition-based maintenance plus (CBM+) policy will accelerate the adoption, integration, and use of these emerging technologies while shifting its strategic approach from largely reactive to proactive maintenance. Participating in a panel session addressing this trend, BigBear.ai Senior Vice President of Analytics Carl Napoletano will highlight ProModel's commercial capabilities and ProModel Government Services' legacy capabilities in the federal space.
DIA Future Technologies Symposium May 11-12, 2022: Virtual Event
BigBear.ai's Senior Vice President of Analytics, Frank Porcelli, will brief the DIA community on BigBear.ai's AI-powered solutions in this virtual presentation. After providing a high-level overview and demonstration of the company's AI products (Observe, Orient, and Dominate), Frank will also offer insights into how AI technologies are being leveraged in the federal sector.
Conference on Governance of Emerging Technologies and Science May 19-20, 2022: Phoenix, Arizona
Newly appointed BigBear.ai General Counsel Carolyn Blankenship will attend the ninth edition of Arizona State's annual conference, which examines how to create sustainable governance solutions that address new technologies' legal, regulatory, and policy ramifications. During her presentation, Carolyn will detail the importance of intellectual property (IP) law in AI and the responsible use of AI and other emerging technologies. Prior to starting as General Counsel, Carolyn organized and led the Thomson Reuters cross-functional team that outlined the organization's first set of Data Ethics Principles.
Automotive Innovation Forum May 24-25, 2022: Munich, Germany
ProModel was among the select few organizations invited to attend Autodesk's Automotive Innovation Forum 2022. This premier industry event celebrates new automotive plant design and manufacturing technology solutions. Michael Jolicoeur of ProModel, Director of the Autodesk Business Division, will headline a panel at the conference and highlight the latest industry trends in automotive factory design and automation.
DAX 2022 June 4, 2022: University of Maryland, Baltimore County, Baltimore, Maryland
Three BigBear.ai experts - Zach Casper, Senior Director of Cyber; Leon Worthen, Manager of Strategic Operations; and Sammy Hamilton, Data Scientist/Engagement Engineer - will headline a panel discussion exploring the variety of ways AI and ML are deployed throughout the defense industry. The trio of experts will discuss how AI and ML solve pressing cybersecurity problems facing the Department of Defense and intelligence communities.
To connect with BigBear.ai at these events, send an email to events@bigbear.ai.
About BigBear.ai
BigBear.ai delivers AI-powered analytics and cyber engineering solutions to support mission-critical operations and decision-making in complex, real-world environments. BigBear.ais customers, which include the US Intelligence Community, Department of Defense, the US Federal Government, as well as customers in manufacturing, logistics, commercial space, and other sectors, rely on BigBear.ais solutions to see and shape their world through reliable, predictive insights and goal-oriented advice. Headquartered in Columbia, Maryland, BigBear.ai has additional locations in Virginia, Massachusetts, Michigan, and California. For more information, please visit: http://bigbear.ai/ and follow BigBear.ai on Twitter: @BigBearai.