Author: Sameer Ahuja

  • The Easiest Crab Cakes


    These easy-to-make crab cakes are filled with thick chunks of crab meat mixed together with creamy mayonnaise and sweet red bell pepper, topped off with a kick of red pepper flakes. The BEST crab cake recipe you will ever use!

    These are perfect for a quick dinner, appetizer or snack! If you love seafood as much as I do try this Cajun Popcorn Shrimp, Grilled Lemon Garlic Scallops or Walnut Crusted Maple Salmon.

    Best Crab Cake recipe

    Since you guys have been loving these salmon croquettes so much I decided to make for you the best crab cakes that you will ever eat! These seriously are SO good and simple to make. The best part, is my kids eat them not knowing they’re made with crab! The whole family will love these crab cakes. They work great as an appetizer, side dish or served as a quick snack. Whenever I make them, my family can’t get enough of their crispy texture and savory flavor.

    Not only are they absolutely delicious, but these crab cakes are so quick and easy to make as well! If you’re ever in a pinch and need to make an appetizer that everyone will love, this is definitely the recipe to try! Everyone will be raving after the first bite. Serve them up with homemade tartar sauce or chipotle mayo for some kick. Don’t blame me if these homemade crab cakes become a new obsession!

    What You Need to Make Crab Cakes

    Combine all the ingredients together for a quick bowlful of flavor! The best thing about these crab cakes is that you probably have everything to make them in your pantry right now. This makes them so easy to whip up! Check out the recipe card below for exact measurements.

Crab Meat: Works best if it’s in lump form.
Mayonnaise: Sticks everything together making a creamy and soft texture!
Panko: Creates a light, crunchy breading. You can also use homemade breadcrumbs. Find my full recipe here!
Flour: Helps hold everything together and will give the cakes a crispy crunch on the outside.
Egg: Binds all the ingredients together.
Green Onion: Adds a sweet but savory flavor and hint of texture.
Red Bell Pepper: Adds fresh flavor and a bit of crunch.
Worcestershire Sauce: Helps to balance out flavors.
Cajun Seasoning: Adds a pop of spicy Southern flavor!
Lemon: Citrus complements the flavor of these crab cakes perfectly.
Vegetable Oil: This will help cook them to a crisp, golden brown.

    How to Make Crab Cakes

    It only takes 2 steps to whip up these homemade crab cakes. 2 steps!! They’re so easy but taste like they came from your favorite seafood restaurant!

Preparing the Filling: In a large bowl, add the crabmeat, mayonnaise, Panko, flour, egg, green onion, bell pepper, Worcestershire sauce, Cajun seasoning, and lemon juice. Mix together and shape into patties.
Cooking the Cakes: In a large saucepan, add the oil and heat to medium-high. Add the patties to the pan and fry for 4-5 minutes per side, until golden brown. Serve with your favorite sauce.

    Tips and Tricks

    Making homemade crab cakes is a simple and delicious way to enjoy the fresh flavor of crab meat. With just a few basic ingredients and some easy-to-follow steps, you can customize your crab cakes to preference and create the perfect dish every time!

Which Crab Meat to Use: I recommend fresh, refrigerated, lump crab meat.
Smaller Crab Cakes: For smaller cakes, divide the mixture into smaller 1 to 2 inch circles. For mini crab cakes, divide into even smaller pieces, about 1 to 2 tablespoons for each cake. Bake at the same oven temperature. The bake time is shorter for these smaller sizes, so keep an eye on them. You’ll know the crab cakes are done when the tops and edges are lightly browned.
Cooked Crab Cakes: According to the USDA, seafood is cooked through and safe to eat when the internal temperature reaches 145 degrees F.
Baking Crab Cakes: If you don’t want to pan fry your crab cakes, you can also cook them in the oven! Bake for 10 minutes at 400 degrees Fahrenheit, flipping once halfway through. They’re done cooking when the outsides turn a nice golden brown color.

    Storing Leftovers

    Storing crab cakes for later is super easy and convenient. This makes them the perfect make-ahead meal for busy weeknights! Simply place the cooked cakes in an airtight container and store them in the refrigerator for up to 4 days or in the freezer for 2-3 months. When you’re ready to enjoy them, simply reheat them in the oven or on the stovetop until they are hot and crispy.

In the Refrigerator: Crab cakes are best kept in the refrigerator once they have been cooked. It is best to store them in an airtight container for 3-4 days.
In the Freezer: Follow the instructions on how to prepare the crab cakes. Once they're ready to cook, you can freeze the pre-made crab cakes in a Ziploc bag. Lay flat in the freezer for up to 1 month. If you want to cook them before freezing, cook them and let them cool. Once cooled, freeze them in a Ziploc bag, laid flat, for up to 3 months. Thaw before reheating for best results.
Reheating: When ready to serve the cakes, cook over medium-high heat to warm up. You can also bake them in the oven at 350 degrees F for 12 to 15 minutes, or until warmed through. If they are frozen, add a few extra minutes of cooking time.

More Delicious Appetizers

Appetizers are the best part of any get-together. Not sure what to make for your next party? I’ve got you covered! All of these recipes are super easy to make and SO delicious. They’re any seafood-lover’s dream! Their big flavor will have all of your guests hooked. See my full list of appetizers here!

Dinner: Honey Garlic Butter Scallops
Appetizers: Shrimp Zucchini Boats
Dressings, Sauces, and Dips: Insanely Delicious Hot Crab Dip
Appetizers: Copycat Bang Bang Shrimp


    The Easiest Crab Cakes

    These easy-to-make crab cakes are filled with thick chunks of crab meat mixed together with creamy mayonnaise and sweet red bell pepper, topped off with a kick of red pepper flakes. The BEST crab cake recipe you will ever use!

Course: Appetizer, Side Dish, Snack
Cuisine: American, Caribbean, Mediterranean
Keyword: crab, crab cake recipe, crab cakes, homemade crab cakes

Prep Time: 10 minutes
Cook Time: 5 minutes
Total Time: 15 minutes

    Servings 8 crab cakes
    Calories 198kcal
    Author Alyssa Rivers

Ingredients
1 pound lump crabmeat
1/4 cup mayonnaise
1/2 cup Panko
1/4 cup flour
1 egg
2 Tablespoons green onion, sliced
1/2 red bell pepper, finely diced
1 Tablespoon Worcestershire sauce
1 Tablespoon Cajun seasoning
juice of half a lemon
1/4 cup vegetable oil, for frying
Instructions
In a large bowl add the crabmeat, mayonnaise, Panko, flour, egg, green onion, bell pepper, Worcestershire sauce, Cajun seasoning, and lemon juice. Mix together and shape into patties.
In a large saucepan, add the oil and heat to medium-high. Add the patties and fry until they are brown on each side, 4-5 minutes. Serve with your favorite sauce.

Notes
Originally posted on December 29, 2019
Updated February 1, 2023
Nutrition
Calories: 198kcal | Carbohydrates: 7g | Protein: 12g | Fat: 13g | Saturated Fat: 7g | Cholesterol: 47mg | Sodium: 576mg | Potassium: 183mg | Fiber: 1g | Sugar: 1g | Vitamin A: 722IU | Vitamin C: 14mg | Calcium: 40mg | Iron: 1mg


  • Cheesecake Stuffed Strawberries


    Cheesecake stuffed strawberries are the perfect party treat filled with a creamy and smooth cream cheese filling. Topped with a graham cracker crumb, they are juicy, tangy, and sweet all at the same time. This easy and quick dessert is unbelievably delicious and impossible to resist!

    Is there anything better than a sweet ripe strawberry? Maybe one dipped in chocolate! For more fun strawberry recipes, you should try these Chocolate Covered Strawberries, these amazing Strawberry Brownies, and this super yummy Strawberry Cheesecake Salad.

    Strawberries Stuffed with Cheesecake

    Fresh ripe cheesecake stuffed strawberries are the perfect thing to serve for any holiday celebration! From Valentine’s Day to the 4th of July and of course Mother’s Day, these are the perfect treat! They are super simple and easy to make and the freshness of the strawberry really shines. The sweetness of the strawberry plus the tangy cheesecake filling are a match made in heaven.

    I love strawberry anything! We make a lot of strawberry desserts at my house, like this easy fresh strawberry pie and this irresistible strawberry milkshake. When berries are ripe and in season, all I can think about is cheesecake-stuffed strawberries! An entire batch of these was gone in minutes! These are a MUST try for a festive and fun dessert everyone will go crazy over! They are beautiful served on a chocolate charcuterie board or a Valentine’s Day charcuterie board too. You really can’t go wrong any way you serve them!

    Ingredients

    There are only 5 ingredients in this delicious and simple dessert. The most important thing is to choose really ripe and fresh strawberries. Other than that, there is nothing to this recipe. It is the perfect recipe to make when you don’t have a lot of time but still want a dessert to WOW your guests. Out of all of the amazing dessert recipes on my blog, this is still one of my favorites! You can find the exact measurements below in the recipe card.

Cream Cheese: Adds a mild sweet and tangy flavor and a smooth creamy filling in the strawberry.
Powdered Sugar: Sweetness!
Vanilla Extract: The vanilla brings out all the other flavors and adds a rich flavor of its own as well.
Strawberries: Fresh and juicy and beautifully red!
Crushed Graham Cracker: The perfect crispy crumb to add a small crunch to each bite.

    Cheesecake Stuffed Strawberries Recipe

    This cheesecake stuffed strawberry recipe is so easy, you are going to want to make them all the time! The best part is the cream cheese filling if you ask me. No need to bake cheesecake! It’s a simple cream cheese mixture. They are also a great recipe to make with kids. My kids are great at helping me remove the insides of the strawberry. It’s also really fun for them to use the piping bag to fill them up with delicious cream. I can’t wait for you to try this recipe!

Make Filling: In a stand mixer or using a hand mixer, combine cream cheese, powdered sugar, and vanilla and beat until smooth and creamy. Fill a piping bag with the mixture.
Prepare Strawberries: Core out each strawberry and fill with the cream cheese mixture.
Add Topping: Top with crushed graham cracker.
Enjoy!

    Tips for Making Cheesecake Stuffed Strawberries

    Every time I make this recipe, I get so many compliments. Everyone is in love with them! You definitely won’t have any leftovers either. Here are a few tips for making these cheesecake-stuffed strawberries.

Cutting: Use a small paring knife or melon baller to get the center out of the strawberry. This is the longest part of the process, but it’s so worth it.
Filling: If you do not have a pastry bag you can put the filling in a Ziploc bag and cut off the corner instead. It works the same way, and you can even put a piping tip in the bottom to get the cute design!
Graham Cracker Crumb: You can make your own graham cracker crumbs by putting the graham crackers in a Ziploc bag. Grab a rolling pin and hit the bag until they are a nice small crumb. You could also put them in a food processor and pulse them until fine.
Dip In Chocolate: It is no surprise that these strawberries are perfect dipped in chocolate. Pick your favorite chocolate and melt it in a bowl in 15-second intervals in the microwave. Once it is nice and creamy, dip your strawberries in it and set them on parchment to firm up. They are the perfect treat!

    Storing Leftovers

    These cheesecake stuffed strawberries are so easy to make and they are perfect for any holiday. I love them because they actually store well for a few days! You don’t have to rush right before an event to have these ready. Here is how to store your leftovers.

    In the Refrigerator: Place in an airtight container for up to 3 days in the refrigerator. If you are making these ahead, store the graham cracker crumb separately and sprinkle on at the last minute since it will get less crisp as it sits.

Other Strawberry Recipes to Try

Strawberries are the perfect fruit for a dessert. They are juicy and sweet, and have a beautiful color. They taste great cooked and fresh, and they are always a crowd pleaser. If you are in the mood for a strawberry dessert, here are a few of my favorites that you have to try!

Desserts: Strawberry Cobbler
Desserts: Strawberry Tart
Desserts: Strawberry Pretzel Salad
Desserts: Strawberry Shortcake Cupcakes


    Cheesecake Stuffed Strawberries

    Cheesecake stuffed strawberries are the perfect party treat filled with a creamy and smooth cream cheese filling. Topped with a graham cracker crumb, they are juicy, tangy, and sweet all at the same time. This easy and quick dessert is unbelievably delicious and impossible to resist!

Course: Appetizer, Dessert
Cuisine: American
Keyword: cheesecake strawberries, cheesecake stuffed strawberries, cream cheese stuffed strawberries

Prep Time: 15 minutes
Cook Time: 0 minutes
Total Time: 15 minutes

    Servings 12 Strawberries
    Calories 33kcal
    Author Alyssa Rivers

Ingredients
1 (8 ounce) package cream cheese, softened
1/2 cup powdered sugar
1 teaspoon vanilla
1 pound strawberries
1/4 cup crushed graham cracker, for topping
Instructions
In a stand mixer or using a hand mixer, combine cream cheese, powdered sugar, and vanilla and beat until smooth and creamy. Fill a piping bag with the mixture.
Core out each strawberry and fill with the cream cheese mixture. Top with crushed graham cracker.

Notes
Updated Feb 2023
Originally Posted June 2020
Nutrition
Calories: 33kcal | Carbohydrates: 8g | Protein: 1g | Fat: 1g | Saturated Fat: 1g | Cholesterol: 1mg | Sodium: 1mg | Potassium: 58mg | Fiber: 1g | Sugar: 7g | Vitamin A: 5IU | Vitamin C: 22mg | Calcium: 6mg | Iron: 1mg


  • Red Velvet Sugar Cookie Bars


    These decadent red velvet sugar cookie bars with cream cheese frosting are so divine. This is the best of both worlds, all the goodness of red velvet cake in an easy-to-make cookie bar form.

    Red velvet is one of my favorite dessert flavors. It’s rich and chocolatey with the best hint of tang! If you love it as much as I do, here are a few more recipes you need to try: red velvet cheesecake, red velvet pound cake, and red velvet white chocolate chip cookies.

    Red Velvet Sugar Cookie Bars

These could quite possibly be the best cookie bars that I have ever made. They’re thick, dense, and slightly crisp on the edges. It’s a perfect cross between a cookie and a dessert bar. Topped with a decadent cream cheese frosting, these are going to be your new go-to favorites. They’re everything you love about red velvet cake but in a moist, fudgy dessert bar form. Psst- they’re also a great option for Valentine’s Day!

These bars though. They’re so moist and heavenly! The tangy flavor of the cream cheese frosting perfectly complements the sweetness of the red velvet, making every bite a burst of flavor in your mouth. These sugar cookie bars are so good, you’ll find yourself reaching for another one before you even finish the first! Perfect for a sweet snack, a party dessert, or even as a special breakfast treat, these bars are sure to become a favorite in no time. So go ahead, give them a try and taste the yummy goodness for yourself!

    Ingredients for Red Velvet Cookie Bars and Frosting

    These red velvet sugar cookies are scrumptious and easy to make. You are going to love how fast both the cookie bars and frosting come together. If you’re looking for exact measurements, they can all be found in the recipe card below.

Flour: All-purpose flour works great here!
Unsweetened Cocoa: If you have natural (non-alkalized) cocoa powder, use that!
Salt: Balances out the sweetness of the cookie bars.
Baking Powder: A necessary rising agent.
Butter: Bring the butter to room temperature before using it. This way, you will end up with a smoother batter.
Sugar: Just regular granulated sugar works fine!
Eggs: Give the cookies texture and lift.
Vanilla: Adds a little extra flavor. Use pure vanilla extract if you can!
Red Food Coloring: For that signature red color.

    Cream Cheese Frosting

Cream Cheese: Make sure your cream cheese is at room temperature so your frosting ends up nice and smooth.
Butter: Should also be at room temperature.
Powdered Sugar: For thickness and sweetness.
Vanilla: Adds an extra pop of flavor!

    How to Make Red Velvet Sugar Cookie Bars

    As easy as cookies but faster and more hands-off. These will be a winner when you’re short on time, but need big flavor and big wow factor. Red velvet makes everything just a bit more special! These sugar cookie bars almost look too good to eat.

Preheat Oven, Prep Pan: Preheat the oven to 350 degrees. Have a 9×13 inch pan ready. I like to line mine with aluminum foil or parchment paper and spray it with cooking spray so that the bars lift out easily and are easy to cut.
Mix Dry Ingredients: In a medium bowl, whisk together the flour, cocoa, salt, and baking powder. Set aside.
Mix Wet Ingredients: In a mixing bowl, cream together the softened butter and sugar until light and creamy, about 2-3 minutes. Beat in the eggs, vanilla, and food coloring until combined.
Combine and Bake: Add the flour mixture and mix until a soft dough forms. Press into the bottom of the 9×13 inch pan. Bake for about 20 minutes, until the edges start to pull away from the sides and a toothpick inserted into the center comes out clean. Allow to cool completely before frosting.
Prepare Frosting and Enjoy: To make the cream cheese frosting, beat together the cream cheese and butter. Add the powdered sugar and vanilla. Beat until smooth. Frost the top of the bars and enjoy!

    Tips and Tricks

    These heavenly red velvet sugar cookie bars are a cinch to make. Here are a few extra tips to keep in mind so they turn out perfectly!

Prep Your Pan: Line your pan with aluminum foil and spray it with cooking spray so that the bars easily lift out and are easy to cut.
Room Temperature Ingredients: The butter and cream cheese should be brought to room temperature naturally, not in the microwave. This will ensure the creamiest texture. The butter and cream cheese will incorporate more evenly.
Batter Thickness: Don’t panic if the batter is thick, it’s definitely a cookie batter, not a cake batter.
Press: When you put the cookie bars into the pan use a spatula to press down evenly. It will bake up nice and chewy and amazing!
Add Toppings: Use your imagination to decorate these red velvet sugar cookie bars! Use sprinkles and candies to make them festive, for whatever holiday or occasion you’re making them for! You can also add a drizzle of chocolate or white chocolate sauce for extra decadence.

    Storing Leftovers

    Red velvet sugar cookie bars with cream cheese frosting will last for 3-5 days if stored properly. To keep them fresh, it’s best to store leftovers in an airtight container in the refrigerator. When you’re ready to eat them, simply take the bars out of the refrigerator a few minutes before serving to allow them to reach room temperature. This way, they will be nice and soft and taste as good as they did on the first day!

More Red Velvet Desserts

Not only is red velvet a fantastic dessert flavor, but it’s perfect for Valentine’s Day with its deep red coloring! With Valentine’s Day just around the corner, here are a few more delicious treats that you have to add to the dessert lineup! They’re all rich, decadent, and are sure to satisfy any sweet tooth. Enjoy!

Desserts: The BEST Red Velvet Cupcakes with Cream Cheese Frosting
Desserts: Red Velvet Brownies with Cream Cheese Frosting
Desserts: Red Velvet Thumbprint Cookies with Cream Cheese Filling
Desserts: Red Velvet Cake Recipe


    Red Velvet Sugar Cookie Bars

    These decadent red velvet sugar cookie bars with cream cheese frosting are so divine. This is the best of both worlds, all the goodness of red velvet cake in an easy-to-make cookie bar form.

Course: Dessert
Cuisine: American
Keyword: cookie bar recipes, red velvet cookies, red velvet sugar cookie bars

Prep Time: 15 minutes
Cook Time: 20 minutes
Total Time: 35 minutes

    Servings 16 Bars
    Calories 452kcal
    Author Alyssa Rivers

Ingredients
2 1/2 cups flour
1/4 cup unsweetened cocoa
1/2 teaspoon salt
1 teaspoon baking powder
1 cup butter, softened
1 1/2 cups sugar
2 eggs
2 teaspoons vanilla extract
2 Tablespoons red food coloring
Cream Cheese Frosting:
8 ounces cream cheese, softened
1/2 cup butter, softened
2 cups sifted powdered sugar
1 teaspoon vanilla
Instructions
Preheat the oven to 350 degrees. Have a 9×13 inch pan ready. I like to line mine with aluminum foil or parchment paper and spray it with cooking spray so that the bars lift out easily and are easy to cut.
In a medium bowl, whisk together the flour, cocoa, salt, and baking powder. Set aside.
In a mixing bowl, cream together the softened butter and sugar until light and creamy, about 2-3 minutes. Beat in the eggs, vanilla, and food coloring until combined.
Add the flour mixture and mix until a soft dough forms. Press into the bottom of the 9×13 inch pan. Bake for about 20 minutes, until the edges start to pull away from the sides and a toothpick inserted into the center comes out clean. Allow to cool completely before frosting.
To make the cream cheese frosting, beat together the cream cheese and butter. Add the powdered sugar and vanilla. Beat until smooth. Frost the top of the bars and enjoy!

Notes
Originally posted January 26, 2014
Updated on January 31, 2023
Nutrition
Calories: 452kcal | Carbohydrates: 55g | Protein: 5g | Fat: 24g | Saturated Fat: 15g | Cholesterol: 86mg | Sodium: 291mg | Potassium: 109mg | Fiber: 1g | Sugar: 34g | Vitamin A: 799IU | Calcium: 42mg | Iron: 2mg


  • Define a custom session duration and terminate active sessions in IAM Identity Center


    Managing access to accounts and applications requires a balance between delivering simple, convenient access and managing the risks associated with active user sessions. Based on your organization’s needs, you might want to make it simple for end users to sign in and to operate long enough to get their work done, without the disruptions associated with requiring re-authentication. You might also consider shortening the session to help meet your compliance or security requirements. At the same time, you might want to terminate active sessions that your users don’t need, such as sessions for former employees, sessions for which the user failed to sign out on a second device, or sessions with suspicious activity.
    With AWS IAM Identity Center (successor to AWS Single Sign-On), you now have the option to configure the appropriate session duration for your organization’s needs while using new session management capabilities to look up active user sessions and revoke unwanted sessions.
    In this blog post, I show you how to use these new features in IAM Identity Center. First, I walk you through how to configure the session duration for your IAM Identity Center users. Then I show you how to identify existing active sessions and terminate them.
    What is IAM Identity Center?
    IAM Identity Center helps you securely create or connect your workforce identities and manage their access centrally across AWS accounts and applications. IAM Identity Center is the recommended approach for workforce identities to access AWS resources. In IAM Identity Center, you can integrate with an external identity provider (IdP), such as Okta Universal Directory, Microsoft Azure Active Directory, or Microsoft Active Directory Domain Services, as an identity source or you can create users directly in IAM Identity Center. The service is built on the capabilities of AWS Identity and Access Management (IAM) and is offered at no additional cost.
    IAM Identity Center sign-in and sessions
    You can use IAM Identity Center to access applications and accounts and to get credentials for the AWS Management Console, AWS Command Line Interface (AWS CLI), and AWS SDK sessions. When you log in to IAM Identity Center through a browser or the AWS CLI, an AWS access portal session is created. When you federate into the console, IAM Identity Center uses the session duration setting on the permission set to control the duration of the session.

    Note: The access portal session duration for IAM Identity Center differs from the IAM permission set session duration, which defines how long a user can access their account through the IAM Identity Center console.

    Before the release of the new session management feature, the AWS access portal session duration was fixed at 8 hours. Now you can configure the session duration for the AWS access portal in IAM Identity Center from 15 minutes to 7 days. The access portal session duration determines how long the user can access the portal, applications, and accounts, and run CLI commands without re-authenticating. If you have an external IdP connected to IAM Identity Center, the access portal session duration will be the lesser of either the session duration that you set in your IdP or the session duration defined in IAM Identity Center. Users can access accounts and applications until the access portal session expires and initiates re-authentication.
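The "lesser of the two durations" rule above is easy to reason about as a small calculation. The sketch below is purely illustrative (the function name and parameters are this example's own, not an AWS API) and shows how the effective access portal expiry falls out of the IdP and IAM Identity Center settings:

```python
from datetime import datetime, timedelta, timezone

def access_portal_expiry(signed_in_at: datetime,
                         idp_minutes: int,
                         identity_center_minutes: int) -> datetime:
    """Return when the AWS access portal session would expire, using the
    lesser of the IdP session duration and the IAM Identity Center
    session duration. Illustrative helper only, not an AWS API."""
    effective = min(idp_minutes, identity_center_minutes)
    return signed_in_at + timedelta(minutes=effective)

# Example: the IdP allows 12 hours, but Identity Center is set to 8 hours,
# so the access portal session ends 8 hours after sign-in.
start = datetime(2023, 2, 1, 9, 0, tzinfo=timezone.utc)
print(access_portal_expiry(start, idp_minutes=12 * 60,
                           identity_center_minutes=8 * 60))
# 2023-02-01 17:00:00+00:00
```

In other words, shortening the duration in either system shortens the effective session; lengthening it in only one system has no effect.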
    When users access accounts or applications through IAM Identity Center, it creates an additional session that is separate but related to the AWS access portal session. AWS CLI sessions use the AWS access portal session to access roles. The duration of console sessions is defined as part of the permission set that the user accessed. When a console session starts, it continues until the duration expires or the user ends the session. IAM Identity Center-enabled application sessions re-verify the AWS access portal session approximately every 60 minutes. These sessions continue until the AWS access portal session terminates, until another application-specific condition terminates the session, or until the user terminates the session.
    To summarize:

    After a user signs in to IAM Identity Center, they can access their assigned roles and applications for a fixed period, after which they must re-authenticate.
    If a user accesses an assigned permission set, the user has access to the corresponding role for the duration defined in the permission set (or by the user terminating the session).
    The AWS CLI uses the AWS access portal session to access roles. The AWS CLI refreshes the IAM permission set in the background. The CLI job continues to run until the access portal session expires.
    If users access an IAM Identity Center-enabled application, the user can retain access to an application for up to an hour after the access portal session has expired.

    Note: IAM Identity Center doesn’t currently support session management capabilities for Active Directory identity sources.

    For more information about session management features, see Authentication sessions in the documentation.
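For the AWS CLI sessions described above, the CLI obtains and refreshes credentials through an `sso-session` section in its configuration file. A minimal `~/.aws/config` sketch follows; the start URL, region, account ID, and role name are placeholders you would replace with your own values:

```ini
# ~/.aws/config -- illustrative values only
[sso-session my-sso]
sso_start_url = https://my-sso-portal.awsapps.com/start
sso_region = us-east-1
sso_registration_scopes = sso:account:access

[profile dev]
sso_session = my-sso
sso_account_id = 111122223333
sso_role_name = ReadOnlyAccess
region = us-east-1
```

With this in place, `aws sso login --profile dev` starts an access portal session, and `aws sso logout` discards the cached token on that device (it does not, by itself, revoke the server-side session described in the next section).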
    Configure session duration
    In this section, I show you how to configure the session duration for the AWS access portal in IAM Identity Center. You can choose a session duration between 15 minutes and 7 days.
    Session duration is a global setting in IAM Identity Center. After you set the session duration, the maximum session duration applies to IAM Identity Center users.
    To configure session duration for the AWS access portal:

    Open the IAM Identity Center console.
    In the left navigation pane, choose Settings.
    On the Settings page, choose the Authentication tab.
    Under Authentication, next to Session settings, choose Configure.
    For Configure session settings, choose a maximum session duration from the list of pre-defined session durations in the dropdown. To set a custom session duration, select Custom duration, enter the length for the session in minutes, and then choose Save.

    Figure 1: Set access portal session duration

    Congratulations! You have just modified the session duration for your users. This new duration will take effect on each user’s next sign-in.
    Find and terminate AWS access portal sessions
    With this new release, you can find active portal sessions for your IAM Identity Center users, and if needed, you can terminate the sessions. This can be useful in situations such as the following:

    A user no longer works for your organization or was removed from projects that gave them access to applications or permission sets that they should no longer use.
    If a device is lost or stolen, the user can contact you to end the session. This reduces the risk that someone will access the device and use the open session.

    In these cases, you can find a user’s active sessions in the AWS access portal, select the session that you’re interested in, and terminate it. Depending on the situation, you might also want to deactivate sign-in for the user from the system before revoking the user’s session. You can deactivate sign-in for users in the IAM Identity Center console or in your third-party IdP.
If you first deactivate the user’s sign-in in your IdP, and then deactivate the user’s sign-in in IAM Identity Center, deactivation will take effect in IAM Identity Center without synchronization latency. However, if you deactivate the user in IAM Identity Center first, it is possible that the IdP could activate the user again. By first deactivating the user’s sign-in in your IdP, you can prevent the user from signing in again when you revoke their session. This action is advisable when a user has left your organization and should no longer have access, or if you suspect a valid user’s credentials were stolen and you want to block access until you reset the user’s password.
    Termination of the access portal session does not affect the active permission set session started from the access portal. IAM role session duration when assumed from the access portal will last as long as the duration specified in the permission set. For AWS CLI sessions, it can take up to an hour for the CLI to terminate after the access portal session is terminated.

    Tip: Activate multi-factor authentication (MFA) wherever possible. MFA offers an additional layer of protection to help prevent unauthorized individuals from gaining access to systems or data.

    To manage active access portal sessions in the AWS access portal:

    Open the IAM Identity Center console.
    In the left navigation pane, choose Users.
    On the Users page, choose the username of the user whose sessions you want to manage. This takes you to a page with the user’s information.
    On the user’s page, choose the Active sessions tab. The number in parentheses next to Active sessions indicates the number of current active sessions for this user.

    Figure 2: View active access portal sessions

    Select the sessions that you want to delete, and then choose Delete session. A dialog box appears that confirms you’re deleting active sessions for this user.

    Figure 3: Delete selected active sessions

    Review the information in the dialog box, and if you want to continue, choose Delete session.

    Conclusion
    In this blog post, you learned how IAM Identity Center manages sessions, how to modify the session duration for the AWS access portal, and how to view, search, and terminate active access portal sessions. I also shared some tips on how to think about the appropriate session duration for your use case and related steps that you should take when terminating sessions for users who shouldn’t have permission to sign in again after their session has ended.
    With this new feature, you now have more control over user session management. You can use the console to set configurable session lengths based on your organization’s security requirements and desired end-user experience, and you can also terminate sessions, enabling you to manage sessions that are no longer needed or potentially suspicious.
    To learn more, see Manage IAM Identity Center integrated application sessions.
     If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.
    Want more AWS Security news? Follow us on Twitter.

    Ron Cully
    Ron is a Principal Product Manager at AWS where he has led feature and roadmap planning for workforce identity products for over 6 years. He has over 25 years of experience leading networking and directory related product delivery. Ron is passionate about delivering solutions to help make it easier for you to migrate identity-aware workloads, simplify resource and application authorization, and give people a simple sign-in and access experience in the cloud.

    Palak Arora
    Palak is a Senior Product Manager at AWS Identity. She has over eight years of cyber security experience with specialization in Identity and Access Management (IAM) domain. She has helped various customers across different sectors to define their enterprise and customer IAM roadmap and strategy, and improve the overall technology risk landscape.


  • How to set up ongoing replication from your third-party secrets manager to AWS Secrets Manager

    How to set up ongoing replication from your third-party secrets manager to AWS Secrets Manager

    Secrets managers are a great tool to securely store your secrets and provide access to secret material to a set of individuals, applications, or systems that you trust. Across your environments, you might have multiple secrets managers hosted on different providers, which can increase the complexity of maintaining a consistent operating model for your secrets. In these situations, centralizing your secrets in a single source of truth, and replicating subsets of secrets across your other secrets managers, can simplify your operating model.
    This blog post explains how you can use your third-party secrets manager as the source of truth for your secrets, while replicating a subset of these secrets to AWS Secrets Manager. By doing this, you will be able to use secrets that originate and are managed from your third-party secrets manager in Amazon Web Services (AWS) applications or in AWS services that use Secrets Manager secrets.
    I’ll demonstrate this approach in this post by setting up a sample open-source HashiCorp Vault to create and maintain secrets and create a replication mechanism that enables you to use these secrets in AWS by using AWS Secrets Manager. Although this post uses HashiCorp Vault as an example, you can also modify the replication mechanism to use secrets managers from other providers.

    Important: This blog post is intended to provide guidance that you can use when planning and implementing a secrets replication mechanism. The examples in this post are not intended to be run directly in production, and you will need to take security hardening requirements into consideration before deploying this solution. As an example, HashiCorp provides tutorials on hardening production vaults.

    You can use these links to navigate through this post:

    Why and when to consider replicating secrets
    Two approaches to secrets replication
    Replicate secrets to AWS Secrets Manager with the pull model
    Solution overview
    Set up the solution
    Step 1: Deploy the solution by using the AWS CDK toolkit
    Step 2: Initialize the HashiCorp Vault
    Step 3: Update the Vault connection secret
    Step 4: (Optional) Set up email notifications for replication failures
    Test your secret replication
    Update a secret
    Secret replication logic
    Use your secret
    Manage permissions
    Options for customizing the sample solution
    Why and when to consider replicating secrets
    The primary use case for this post is for customers who are running applications on AWS and are currently using a third-party secrets manager to manage their secrets, hosted on-premises, in the AWS Cloud, or with a third-party provider. These customers typically have existing secrets vending processes, deployment pipelines, and procedures and processes around the management of these secrets. Customers with such a setup might want to keep their existing third-party secrets manager and have a set of secrets that are accessible to workloads running outside of AWS, as well as workloads running within AWS, by using AWS Secrets Manager.
    Another use case is for customers who are in the process of migrating workloads to the AWS Cloud and want to maintain a (temporary) hybrid form of secrets management. By replicating secrets from an existing third-party secrets manager, customers can migrate their secrets to the AWS Cloud one-by-one, test that they work, integrate the secrets with the intended applications and systems, and once the migration is complete, remove the third-party secrets manager.
    Additionally, some AWS services, such as Amazon Relational Database Service (Amazon RDS) Proxy, AWS Direct Connect MACsec, and AD Connector seamless join (Linux), only support secrets from AWS Secrets Manager. Customers can use secret replication if they have a third-party secrets manager and want to be able to use third-party secrets in services that require integration with AWS Secrets Manager. That way, customers don’t have to manage secrets in two places.
    Two approaches to secrets replication
    In this post, I’ll discuss two main models to replicate secrets from an external third-party secrets manager to AWS Secrets Manager: a pull model and a push model.
    Pull model
    In a pull model, you can use AWS services such as Amazon EventBridge and AWS Lambda to periodically call your external secrets manager to fetch secrets and updates to those secrets. The main benefit of this model is that it doesn’t require any major configuration changes to your third-party secrets manager. The AWS resources and mechanism used for pulling secrets must have appropriate permissions and network access to those secrets. However, there could be a delay between the time a secret is created or updated and when it’s picked up for replication, depending on the interval configured between pulls from AWS to the external secrets manager.
    Push model
    In this model, rather than periodically polling for updates, the external secrets manager pushes updates to AWS Secrets Manager as soon as a secret is added or changed. The main benefit is that there is minimal delay between secret creation or update and when that data is available in AWS Secrets Manager. The push model also minimizes the network traffic required for replication, because it’s a unidirectional flow. However, this model adds a layer of complexity, because it requires additional configuration in the third-party secrets manager. More specifically, the push model depends on the third-party secrets manager’s ability to run event-based push integrations with AWS resources, which requires a custom integration to be developed and managed on the third-party secrets manager’s side.
    This blog post focuses on the pull model to provide an example integration that requires no additional configuration on the third-party secrets manager.
    Replicate secrets to AWS Secrets Manager with the pull model
    In this section, I’ll walk through an example of how to use the pull model to replicate your secrets from an external secrets manager to AWS Secrets Manager.
    Solution overview

    Figure 1: Secret replication architecture diagram

    The architecture shown in Figure 1 consists of the following main steps, numbered in the diagram:

    A Cron expression in Amazon EventBridge invokes an AWS Lambda function every 30 minutes.
    To connect to the third-party secrets manager, the Lambda function, written in Node.js, fetches a set of user-defined API keys belonging to the secrets manager from AWS Secrets Manager. These API keys have been scoped down to give read-only access to the secrets that should be replicated, to adhere to the principle of least privilege. There is more information on this in Step 3: Update the Vault connection secret.
    The third step has two variants depending on where your third-party secrets manager is hosted:

    The Lambda function is configured to fetch secrets from a third-party secrets manager that is hosted outside AWS. This requires sufficient networking and routing to allow communication from the Lambda function.

    Note: Depending on the location of your third-party secrets manager, you might have to consider different networking topologies. For example, you might need to set up hybrid connectivity between your external environment and the AWS Cloud by using AWS Site-to-Site VPN or AWS Direct Connect, or both.

    The Lambda function is configured to fetch secrets from a third-party secrets manager running on Amazon Elastic Compute Cloud (Amazon EC2).

    Important: To simplify the deployment of this example integration, I’ll use a secrets manager hosted on a publicly available Amazon EC2 instance within the same VPC as the Lambda function (3b). This minimizes the additional networking components required to interact with the secrets manager. More specifically, the EC2 instance runs an open-source HashiCorp Vault. In the rest of this post, I’ll refer to the HashiCorp Vault’s API keys as Vault tokens.

    The Lambda function compares the version of the secret that it just fetched from the third-party secrets manager against the version of the secret that it has in AWS Secrets Manager (by tag). The function will create a new secret in AWS Secrets Manager if the secret does not exist yet, and will update it if there is a new version. The Lambda function will only consider secrets from the third-party secrets manager for replication if they match a specified prefix. For example, hybrid-aws-secrets/.
    If there is an error synchronizing a secret, an email notification is sent to the email addresses that are subscribed to the Amazon Simple Notification Service (Amazon SNS) topic deployed by the solution. This sample application uses email notifications with Amazon SNS as an example, but you could also integrate with services like ServiceNow, Jira, Slack, or PagerDuty. Learn more about how to use webhooks to publish Amazon SNS messages to external services.
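    As an illustration of step 1, the 30-minute schedule can be defined in the AWS CDK (TypeScript) roughly as follows. This is a minimal sketch under assumed names, not the sample repository’s actual stack; the construct IDs and the inline Lambda body are placeholders.

    ```typescript
    import { Duration, Stack, StackProps } from 'aws-cdk-lib';
    import * as events from 'aws-cdk-lib/aws-events';
    import * as targets from 'aws-cdk-lib/aws-events-targets';
    import * as lambda from 'aws-cdk-lib/aws-lambda';
    import { Construct } from 'constructs';

    // Sketch of step 1: an EventBridge rule that invokes the replication
    // Lambda function every 30 minutes. All names here are illustrative.
    export class ReplicationScheduleSketch extends Stack {
      constructor(scope: Construct, id: string, props?: StackProps) {
        super(scope, id, props);

        const replicationFn = new lambda.Function(this, 'SecretReplicationFn', {
          runtime: lambda.Runtime.NODEJS_18_X,
          handler: 'index.handler',
          // Placeholder body; the real function fetches and compares secrets.
          code: lambda.Code.fromInline('exports.handler = async () => {};'),
        });

        new events.Rule(this, 'ReplicationSchedule', {
          schedule: events.Schedule.rate(Duration.minutes(30)),
          targets: [new targets.LambdaFunction(replicationFn)],
        });
      }
    }
    ```

    You can trade off replication delay against invocation cost by adjusting the rate expression.
    
    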

    Set up the solution
    In this section, I walk through deploying the pull model solution displayed in Figure 1 using the following steps:

    Step 1: Deploy the solution by using the AWS CDK toolkit
    Step 2: Initialize the HashiCorp Vault
    Step 3: Update the Vault connection secret
    Step 4: (Optional) Set up email notifications for replication failures
    Step 1: Deploy the solution by using the AWS CDK toolkit
    For this blog post, I’ve created an AWS Cloud Development Kit (AWS CDK) script, which can be found in this AWS GitHub repository. Using the AWS CDK, I’ve defined the infrastructure depicted in Figure 1 as Infrastructure as Code (IaC), written in TypeScript, ready for you to deploy and try out. The AWS CDK is an open-source software development framework that allows you to write your cloud application infrastructure as code using common programming languages such as TypeScript, Python, Java, Go, and so on.
    Prerequisites:
    To deploy the solution, the following should be in place on your system:

    Git
    Node (version 16 or higher)
    jq
    AWS CDK Toolkit. Install using npm (included in Node setup) by running npm install -g aws-cdk in a local terminal.
    An AWS access key ID and secret access key configured, because this setup interacts with your AWS account. See Configuration basics in the AWS Command Line Interface User Guide for more details.
    Docker installed and running on your machine

    To deploy the solution

    Clone the CDK script for secret replication.

    git clone https://github.com/aws-samples/aws-secrets-manager-hybrid-secret-replication-from-hashicorp-vault.git SecretReplication

    Use the cloned project as the working directory.

    cd SecretReplication

    Install the required dependencies to deploy the application.

    npm install

    Adjust any configuration values for your setup in the cdk.json file. For example, you can adjust the secretsPrefix value to change which prefix is used by the Lambda function to determine the subset of secrets that should be replicated from the third-party secrets manager.

    Bootstrap your AWS environments with some resources that are required to deploy the solution. With correctly configured AWS credentials, run the following command.

    cdk bootstrap

    The core resources created by bootstrapping are an Amazon Elastic Container Registry (Amazon ECR) repository for the AWS Lambda Docker image, an Amazon Simple Storage Service (Amazon S3) bucket for static assets, and AWS Identity and Access Management (IAM) roles with corresponding IAM policies. You can find a full list of the resources by going to the CDKToolkit stack in AWS CloudFormation after the command has finished.

    Deploy the infrastructure.

    cdk deploy

    This command deploys the infrastructure shown in Figure 1 for you by using AWS CloudFormation. For a full list of resources, you can view the SecretsManagerReplicationStack in AWS CloudFormation after the deployment has completed.

    Note: If your local environment does not have a terminal that allows you to run these commands, consider using AWS Cloud9 or AWS CloudShell.
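    The configuration mentioned in step 4 lives in cdk.json, the standard CDK configuration file. The fragment below is hypothetical; custom values typically sit under the context key, but verify the actual keys and layout in the cloned repository.

    ```json
    {
      "context": {
        "secretsPrefix": "hybrid-aws-secrets",
        "notificationEmail": "security-team@example.com"
      }
    }
    ```
    
    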

    After the deployment has finished, you should see an output in your terminal that looks like the one shown in Figure 2. If successful, the output provides the IP address of the sample HashiCorp Vault and its web interface.

    Figure 2: AWS CDK deployment output

    Step 2: Initialize the HashiCorp Vault
    As part of the output of the deployment script, you will be given a URL to access the user interface of the open-source HashiCorp Vault. To simplify accessibility, the URL points to a publicly available Amazon EC2 instance running the HashiCorp Vault user interface as shown in step 3b in Figure 1.
    Let’s look at the HashiCorp Vault that was just created. Go to the URL in your browser, and you should see the Raft Storage initialize page, as shown in Figure 3.

    Figure 3: HashiCorp Vault Raft Storage initialize page

    The vault requires an initial configuration to set up storage and get the initial set of root keys. You can go through the steps manually in the HashiCorp Vault’s user interface, but I recommend that you use the initialise_vault.sh script that is included as part of the SecretsManagerReplication project instead.
    Using the HashiCorp Vault API, the initialization script will automatically do the following:

    Initializes the Raft storage to allow the Vault to store secrets locally on the instance.
    Creates an initial set of unseal keys for the Vault. Importantly, for demo purposes, the script uses a single key share. For production environments, it’s recommended to use multiple key shares so that multiple shares are needed to reconstruct the root key in case of an emergency.
    Stores the unseal keys in init/vault_init_output.json in your project.
    Unseals the HashiCorp Vault by using the unseal keys generated earlier.
    Enables two key-value secrets engines:

    An engine named after the prefix that you’re using for replication, defined in the cdk.json file. In this example, this is hybrid-aws-secrets. We’re going to use the secrets in this engine for replication to AWS Secrets Manager.
    An engine called super-secret-engine, which you’re going to use to show that your replication mechanism does not have access to secrets outside the engine used for replication.

    Creates three example secrets, two in hybrid-aws-secrets, and one in super-secret-engine.
    Creates a read-only policy, which you can see in the init/replication-policy-payload.json file after the script has finished running, that allows read-only access to only the secrets that should be replicated.
    Creates a new vault token that has the read-only policy attached so that it can be used by the AWS Lambda function later on to fetch secrets for replication.
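    For reference, a Vault policy granting read-only access to the replication prefix might look like the following sketch. The exact paths depend on the secrets engine version; for a KV version 2 engine, secret reads go through data/ and listing through metadata/. Check init/replication-policy-payload.json for the policy the script actually creates.

    ```hcl
    # Read-only access to the replication prefix (KV v2 paths assumed)
    path "hybrid-aws-secrets/data/*" {
      capabilities = ["read"]
    }

    # Listing and metadata (including version numbers) for the same prefix
    path "hybrid-aws-secrets/metadata/*" {
      capabilities = ["read", "list"]
    }
    ```
    
    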

    To run the initialization script, go back to your terminal and run the following command.

    ./initialise_vault.sh
    The script will then ask you for the IP address of your HashiCorp Vault. Provide the IP address (excluding the port) and press Enter. Enter y when prompted so that the script creates a couple of sample secrets.
    If everything is successful, you should see an output that includes tokens to access your HashiCorp Vault, similar to that shown in Figure 4.

    Figure 4: Initialize HashiCorp Vault bash script output

    The setup script output two tokens: a root token that you will use for administrator tasks, and a read-only token that will be used to read secret information for replication. Make sure that you can access these tokens while you follow the rest of the steps in this post.

    Note: The root token is only used for demonstration purposes in this post. In your production environments, you should not use root tokens for regular administrator actions. Instead, use scoped-down roles according to your organizational needs. In this case, the root token is used to highlight that there are secrets under super-secret-engine/ which are not meant for replication. These secrets cannot be seen or accessed by the read-only token.

    Go back to your browser and refresh your HashiCorp Vault UI. You should now see the Sign in to Vault page. Sign in using the Token method, and use the root token. If you don’t have the root token in your terminal anymore, you can find it in the init/vault_init_output.json file.
    After you sign in, you should see the overview page with three secrets engines enabled for you, as shown in Figure 5.

    Figure 5: HashiCorp Vault secrets engines overview

    If you explore hybrid-aws-secrets and super-secret-engine, you can see the secrets that were automatically created by the initialization script. For example, first-secret-for-replication, which contains a sample key-value secret with the key secrets and value manager.
    If you navigate to Policies in the top navigation bar, you can also see the aws-replication-read-only policy, as shown in Figure 6. This policy provides read-only access to only the hybrid-aws-secrets path.

    Figure 6: Read-only HashiCorp Vault token policy

    The read-only policy is attached to the read-only token that we’re going to use in the secret replication Lambda function. This policy is important because it scopes down the access that the Lambda function obtains by using the token to a specific prefix meant for replication. For secret replication we only need to perform read operations. This policy ensures that we can read, but cannot add, alter, or delete any secrets in HashiCorp Vault using the token.
    You can verify the read-only token permissions by signing into the HashiCorp Vault user interface using the read-only token rather than the root token. Now, you should only see hybrid-aws-secrets. You no longer have access to super-secret-engine, which you saw in Figure 5. If you try to create or update a secret, you will get a permission denied error.
    Great! Your HashiCorp Vault is now ready to have its secrets replicated from hybrid-aws-secrets to AWS Secrets Manager. The next section describes a final configuration that you need to do to allow access to the secrets in HashiCorp Vault by the replication mechanism in AWS.
    Step 3: Update the Vault connection secret
    To allow secret replication, you must give the AWS Lambda function access to the HashiCorp Vault read-only token that was created by the initialization script. To do that, you need to update the vault-connection-secret that was initialized in AWS Secrets Manager as part of your AWS CDK deployment.
    For demonstration purposes, I’ll show you how to do that by using the AWS Management Console, but you can also do it programmatically by using the AWS Command Line Interface (AWS CLI) or AWS SDK with the update-secret command.
    To update the Vault connection secret (console)

    In the AWS Management Console, go to AWS Secrets Manager > Secrets > hybrid-aws-secrets/vault-connection-secret.
    Under Secret Value, choose Retrieve Secret Value, and then choose Edit.
    Update the vaultToken value to contain the read-only token that was generated by the initialization script.

    Figure 7: AWS Secrets Manager – Vault connection secret page

    Step 4: (Optional) Set up email notifications for replication failures
    As highlighted in Figure 1, the Lambda function will send an email by using Amazon SNS to a designated email address whenever one or more secrets fail to be replicated. You will need to configure the solution to use the correct email address. To do this, go to the cdk.json file at the root of the SecretReplication folder and adjust the notificationEmail parameter to an email address that you own. Once done, deploy the changes using the cdk deploy command. Within a few minutes, you’ll get an email requesting you to confirm the subscription. Going forward, you will receive an email notification if one or more secrets fail to replicate.
    Test your secret replication
    You can either wait up to 30 minutes for the Lambda function to be invoked automatically to replicate the secrets, or you can manually invoke the function.
    To test your secret replication

    Open the AWS Lambda console and find the Secret Replication function (the name starts with SecretsManagerReplication-SecretReplication).
    Navigate to the Test tab.
    For the test event action, select Create new event, create an event using the default parameters, and then choose the Test button on the right-hand side, as shown in Figure 8.

    Figure 8: AWS Lambda – Test page to manually invoke the function

    This will run the function. You should see a success message, as shown in Figure 9. If this is the first time the Lambda function has been invoked, you will see in the results that two secrets have been created.

    Figure 9: AWS Lambda function output

    You can find the corresponding logs for the Lambda function invocation in the Amazon CloudWatch Logs log group matching the name /aws/lambda/SecretsManagerReplication-SecretReplicationLambdaF-XXXX.
    To verify that the secrets were added, navigate to AWS Secrets Manager in the console, and in addition to the vault-connection-secret that you edited before, you should now also see the two new secrets with the same hybrid-aws-secrets prefix, as shown in Figure 10.

    Figure 10: AWS Secrets Manager overview – New replicated secrets

    For example, if you look at first-secret-for-replication, you can see the first version of the secret, with the secret key secrets and secret value manager, as shown in Figure 11.

    Figure 11: AWS Secrets Manager – New secret overview showing values and version number

    Success! You now have access to the secret values that originate from HashiCorp Vault in AWS Secrets Manager. Also, notice how there is a version tag attached to the secret. This is something that is necessary to update the secret, which you will learn more about in the next two sections.
    Update a secret
    It’s a recommended security practice to rotate secrets frequently. The Lambda function in this solution not only replicates secrets when they are created — it also periodically checks if existing secrets in AWS Secrets Manager should be updated when the third-party secrets manager (HashiCorp Vault in this case) has a new version of the secret. To validate that this works, you can manually update a secret in your HashiCorp Vault and observe its replication in AWS Secrets Manager in the same way as described in the previous section. You will notice that the version tag of your secret gets updated automatically when there is a new secret replication from the third-party secrets manager to AWS Secrets Manager.
    Secret replication logic
    This section explains the logic behind the secret replication in more detail. Consider the following diagram, which shows the overall logic implemented in the Lambda function.

    Figure 12: State diagram for secret replication logic

    This diagram highlights that the Lambda function will first fetch a list of secret names from the HashiCorp Vault. Then, the function will get a list of secrets from AWS Secrets Manager, matching the prefix that was configured for replication. AWS Secrets Manager will return a list of the secrets that match this prefix and will also return their metadata and tags. Note that the function has not fetched any secret material yet.
    Next, the function will loop through each secret name that HashiCorp Vault gave and will check if the secret exists in AWS Secrets Manager:

    If there is no secret that matches that name, the function will fetch the secret material from HashiCorp Vault, including the version number, and create a new secret in AWS Secrets Manager. It will also add a version tag to the secret to match the version.
    If there is a secret matching that name in AWS Secrets Manager already, the Lambda function will first fetch the metadata for that secret from HashiCorp Vault. This is required to get the version number of the secret, because the version number was not exposed when the function got the initial list of secrets from HashiCorp Vault. If the secret version from HashiCorp Vault does not match the version value of the secret in AWS Secrets Manager (for example, the version in HashiCorp Vault is 2, and the version in AWS Secrets Manager is 1), an update is required to get the values synchronized again. Only now will the Lambda function fetch the actual secret material from HashiCorp Vault and update the secret in AWS Secrets Manager, including the version number in the tag.

    The Lambda function fetches metadata about the secrets, rather than just fetching the secret material from HashiCorp Vault straight away. Typically, secrets don’t update very often. If this Lambda function is called every 30 minutes, then it should not have to add or update any secrets in the majority of invocations. By using metadata to determine whether you need the secret material to create or update secrets, you minimize the number of times secret material is fetched both from HashiCorp Vault and AWS Secrets Manager.
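    The decision logic described above can be sketched in TypeScript as follows. The types and the function are illustrative only, not the repository’s actual code; they capture the prefix filter and the version-tag comparison that decide when secret material actually needs to be fetched.

    ```typescript
    // Hypothetical shapes for the two inventories being compared.
    type RemoteSecret = { name: string; version: number };        // from HashiCorp Vault metadata
    type ReplicatedSecret = { name: string; versionTag: number }; // from Secrets Manager tags

    type Plan = { create: string[]; update: string[]; skip: string[] };

    function planReplication(
      remote: RemoteSecret[],
      replicated: ReplicatedSecret[],
      prefix: string,
    ): Plan {
      // Index existing Secrets Manager secrets by name for constant-time lookups.
      const existingVersions = new Map(replicated.map((s) => [s.name, s.versionTag]));
      const plan: Plan = { create: [], update: [], skip: [] };
      for (const secret of remote) {
        if (!secret.name.startsWith(prefix)) continue; // outside the replication scope
        const existing = existingVersions.get(secret.name);
        if (existing === undefined) {
          plan.create.push(secret.name); // no matching secret in Secrets Manager yet
        } else if (existing !== secret.version) {
          plan.update.push(secret.name); // version tag out of date, update needed
        } else {
          plan.skip.push(secret.name); // in sync; no secret material fetched at all
        }
      }
      return plan;
    }
    ```

    Only the names in plan.create and plan.update would trigger a fetch of secret material, which is what keeps the common no-change invocation cheap.
    
    
    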

    Note: The AWS Lambda function has permissions to pull certain secrets from HashiCorp Vault. It is important to thoroughly review the Lambda code and any subsequent changes to it to prevent leakage of secrets. For example, you should ensure that the Lambda function does not get updated with code that unintentionally logs secret material outside the Lambda function.

    Use your secret
    Now that you have created and replicated your secrets, you can use them in your AWS applications or AWS services that are integrated with Secrets Manager. For example, you can use the secrets when you set up connectivity for a proxy in Amazon RDS, as follows.
    To use a secret when creating a proxy in Amazon RDS

    Go to the Amazon RDS service in the console.
    In the left navigation pane, choose Proxies, and then choose Create Proxy.
    On the Connectivity tab, you can now select first-secret-for-replication or second-secret-for-replication, which were created by the Lambda function after replicating them from the HashiCorp Vault.

    Figure 13: Amazon RDS Proxy – Example of using replicated AWS Secrets Manager secrets

    It is important to remember that the consumers of the replicated secrets in AWS Secrets Manager will require scoped-down IAM permissions to use the secrets and AWS Key Management Service (AWS KMS) keys that were used to encrypt the secrets. For example, see Step 3: Create IAM role and policy on the Set up shared database connections with Amazon RDS Proxy page.
    Manage permissions
    Due to the sensitive nature of the secrets, it is important that you scope down the permissions to the minimum required to prevent inadvertent access to your secrets. The setup adopts a least-privilege permission strategy, where only the necessary actions are explicitly allowed on the resources that are required for replication. However, the permissions should be reviewed in accordance with your security standards.
    In the architecture of this solution, there are two main places where you control access to the management of your secrets in Secrets Manager.
    Lambda execution IAM role: The IAM role assumed by the Lambda function during execution contains the appropriate permissions for secret replication. There is an additional safety measure, which explicitly denies any action to a resource that is not required for the replication. For example, the Lambda function only has permission to publish to the Amazon SNS topic that is created for the failed replications, and will explicitly deny a publish action to any other topic. Even if someone accidentally adds an allow to the policy for a different topic, the explicit deny will still block this action.
    AWS KMS key policy: When other services need to access the replicated secret in AWS Secrets Manager, they need permission to use the hybrid-aws-secrets-encryption-key AWS KMS key. You need to allow the principal these permissions through the AWS KMS key policy. Additionally, you can manage permissions to the AWS KMS key for the principal through an identity policy. For example, this is required when accessing AWS KMS keys across AWS accounts. See Permissions for AWS services in key policies and Specifying KMS keys in IAM policy statements in the AWS KMS Developer Guide.
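    As an example of the explicit-deny pattern described for the Lambda execution role, a policy fragment might look like the following sketch. The topic name, Region, and account ID are placeholders, not the sample solution’s actual values.

    ```json
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "AllowPublishToReplicationFailureTopic",
          "Effect": "Allow",
          "Action": "sns:Publish",
          "Resource": "arn:aws:sns:us-east-1:111122223333:replication-failure-topic"
        },
        {
          "Sid": "DenyPublishToAnyOtherTopic",
          "Effect": "Deny",
          "Action": "sns:Publish",
          "NotResource": "arn:aws:sns:us-east-1:111122223333:replication-failure-topic"
        }
      ]
    }
    ```

    Because an explicit deny overrides any allow, a later accidental allow on a different topic still cannot grant the function publish access outside the intended topic.
    
    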
    Options for customizing the sample solution
    The solution that was covered in this post provides an example for replication of secrets from HashiCorp Vault to AWS Secrets Manager using the pull model. This section contains additional customization options that you can consider when setting up the solution, or your own variation of it.

    Depending on the solution that you’re using, you might have access to different metadata attached to the secrets, which you can use to determine if a secret should be updated. For example, if you have access to data that represents a last_updated_datetime property, you could use this to infer whether or not a secret ought to be updated.
    It is a recommended practice to avoid long-lived tokens wherever possible. In this sample, I used a static Vault token to give the Lambda function access to HashiCorp Vault. Depending on the solution that you’re using, you might be able to implement better authentication and authorization mechanisms. For example, HashiCorp Vault supports an AWS auth method that authenticates with AWS IAM credentials rather than a static token.
    This post addressed the creation of secrets and updating of secrets, but for your production setup, you should also consider deletion of secrets. Depending on your requirements, you can choose to implement a strategy that works best for you to handle secrets in AWS Secrets Manager once the original secret in HashiCorp Vault has been deleted. In the pull model, you could consider removing a secret in AWS Secrets Manager if the corresponding secret in your external secrets manager is no longer present.
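One minimal sketch of that comparison (the helper function and names are hypothetical):

```javascript
// Hypothetical cleanup helper for the pull model: find replicated secrets that
// no longer exist in the external secrets manager and are candidates for deletion.
function secretsToDelete(externalSecretNames, replicatedSecretNames) {
  const stillPresent = new Set(externalSecretNames);
  return replicatedSecretNames.filter((name) => !stillPresent.has(name));
}
```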
    In the sample setup, the same AWS KMS key is used to encrypt both the environment variables of the Lambda function, and the secrets in AWS Secrets Manager. You could choose to add an additional AWS KMS key (which would incur additional cost), to have two separate keys for these tasks. This would allow you to apply more granular permissions for the two keys in the corresponding KMS key policies or IAM identity policies that use the keys.

    Conclusion
    In this blog post, you’ve seen how you can approach replicating your secrets from an external secrets manager to AWS Secrets Manager. This post focused on a pull model, where the solution periodically fetched secrets from an external HashiCorp Vault and automatically created or updated the corresponding secret in AWS Secrets Manager. By using this model, you can now use your external secrets in your AWS Cloud applications or services that have an integration with AWS Secrets Manager.
    If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Secrets Manager re:Post or contact AWS Support.
    Want more AWS Security news? Follow us on Twitter.

    Laurens Brinker
    Laurens is a Software Development Engineer working for AWS Security and is based in London. Previously, Laurens worked as a Security Solutions Architect at AWS, where he helped customers running their workloads securely in the AWS Cloud. Outside of work, Laurens enjoys cycling, a casual game of chess, and building open source projects.

    Powered by WPeMatico

  • AWS achieves ISO 20000-1:2018 certification for 109 services

    AWS achieves ISO 20000-1:2018 certification for 109 services

    We continue to expand the scope of our assurance programs at Amazon Web Services (AWS) and are pleased to announce that AWS Regions and AWS Edge locations are now certified against the International Organization for Standardization (ISO) 20000-1:2018 standard. This certification demonstrates our continuous commitment to meet the heightened expectations for cloud service providers.
    Published by the International Organization for Standardization (ISO), ISO 20000-1:2018 helps organizations specify requirements for establishing, implementing, maintaining, and continually improving a Service Management System (SMS).
    AWS was evaluated by EY CertifyPoint, an independent third-party auditor. The Certificate of Compliance illustrating the AWS compliance status is available through AWS Artifact. AWS Artifact is a self-service portal for on-demand access to AWS compliance reports. Sign in to AWS Artifact in the AWS Management Console, or learn more at Getting Started with AWS Artifact.
    As of this writing, 109 services offered globally are in scope of this certification. For up-to-date information, including when additional services are added, see the AWS ISO 20000-1:2018 certification webpage.
    AWS strives to continuously bring services into scope of its compliance programs to help you meet your architectural and regulatory needs. Reach out to your AWS account team if you have questions or feedback about ISO 20000-1:2018 compliance.
    To learn more about our compliance and security programs, see AWS Compliance Programs. As always, we value your feedback and questions; you can reach out to the AWS Compliance team through the Contact Us page.
     If you have feedback about this post, submit comments in the Comments section below.
    Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

    Rodrigo Fiuza
    Rodrigo is a Security Audit Manager at AWS, based in São Paulo. He leads audits, attestations, certifications, and assessments across Latin America, the Caribbean, and Europe. Rodrigo has worked in risk management, security assurance, and technology audits for the past 12 years.


  • Reduce risk by implementing HttpOnly cookie authentication in Amazon API Gateway

    Reduce risk by implementing HttpOnly cookie authentication in Amazon API Gateway

    Some web applications need to protect their authentication tokens or session IDs from cross-site scripting (XSS). It’s an Open Web Application Security Project (OWASP) best practice for session management to store secrets in the browsers’ cookie store with the HttpOnly attribute enabled. When cookies have the HttpOnly attribute set, the browser will prevent client-side JavaScript code from accessing the value. This reduces the risk of secrets being compromised.
    In this blog post, you’ll learn how to store access tokens and authenticate with HttpOnly cookies in your own workloads when using Amazon API Gateway as the client-facing endpoint. The tutorial in this post will show you a solution to store OAuth2 access tokens in the browser cookie store, and verify user authentication through Amazon API Gateway. This post describes how to use Amazon Cognito to issue OAuth2 access tokens, but the solution is not limited to OAuth2. You can use other kinds of tokens or session IDs.
    The solution consists of two decoupled parts:

    OAuth2 flow
    Authentication check

    Note: This tutorial takes you through detailed step-by-step instructions to deploy an example solution. If you prefer to deploy the solution with a script, see the api-gw-http-only-cookie-auth GitHub repository.

    Prerequisites

    You should have an AWS account.
    You should have level 200-300 knowledge about the OAuth2 protocol.
    You should have the AWS Command Line Interface (AWS CLI) installed.
    You should have the AWS Toolkit for Visual Studio Code installed, so you can simply upload your code.
    You should have Node.js installed on your local machine.
    This solution uses the following services:

    Amazon Cognito
    Amazon API Gateway
    AWS Lambda

    You should not incur any costs when you deploy the application from this tutorial, because the services you’re going to use are included in the AWS Free Tier. However, be aware that small charges may apply if you have other workloads running in your AWS account and exceed the free tier. Make sure to clean up your resources from this tutorial after deployment.
    Solution architecture
    This solution uses Amazon Cognito, Amazon API Gateway, and AWS Lambda to build a solution that persists OAuth2 access tokens in the browser cookie store. Figure 1 illustrates the solution architecture for the OAuth2 flow.

    Figure 1: OAuth2 flow solution architecture

    A user authenticates by using Amazon Cognito.
    Amazon Cognito has an OAuth2 redirect URI pointing to your API Gateway endpoint and invokes the integrated Lambda function oAuth2Callback.
    The oAuth2Callback Lambda function makes a request to the Amazon Cognito token endpoint with the OAuth2 authorization code to get the access token.
    The Lambda function returns a response with the Set-Cookie header, instructing the web browser to persist the access token as an HttpOnly cookie. The browser will automatically interpret the Set-Cookie header, because it’s a web standard. HttpOnly cookies can’t be accessed through JavaScript—they can only be set through the Set-Cookie header.

    After the OAuth2 flow, you are set up to issue and store access tokens. Next, you need to verify that users are authenticated before they are allowed to access your protected backend. Figure 2 illustrates how the authentication check is handled.

    Figure 2: Authentication check solution architecture

    A user requests a protected backend resource. The browser automatically attaches HttpOnly cookies to every request, as defined in the web standard.
    The Lambda function oAuth2Authorizer acts as the Lambda authorizer for HTTP APIs. It validates whether requests are authenticated. If requests include the proper access token in the request cookie header, then it allows the request.
    API Gateway only passes through requests that are authenticated.

    Amazon Cognito is not involved in the authentication check, because the Lambda function can validate the OAuth2 access tokens by using a JSON Web Token (JWT) validation check.
    1. Deploying the OAuth2 flow
    In this section, you’ll deploy the first part of the solution, which is the OAuth2 flow. The OAuth2 flow is responsible for issuing and persisting OAuth2 access tokens in the browser’s cookie store.
    1.1. Create a mock protected backend
    As shown in Figure 2, you need to protect a backend. For the purposes of this post, you create a mock backend by creating a simple Lambda function with a default response.
    To create the Lambda function

    In the Lambda console, choose Create function.

    Note: Make sure to select your desired AWS Region.

    Choose Author from scratch as the option to create the function.
    In the Basic information section, as shown in Figure 3, enter or select the following values:

    For Function name, enter getProtectedResource.
    For Runtime, select Node.js 16.x.
    For Architecture, select arm64, because it offers better price performance.

    Choose Create function.

    Figure 3: Configuring the getProtectedResource Lambda function

    The default Lambda function code returns a simple Hello from Lambda message, which is sufficient to demonstrate the concept of this solution.
    1.2. Create an HTTP API in Amazon API Gateway
    Next, you create an HTTP API by using API Gateway. Either an HTTP API or a REST API will work. In this example, choose HTTP API because it’s offered at a lower price point (for this tutorial you will stay within the free tier).
    To create the API Gateway API

    In the API Gateway console, under HTTP API, choose Build.
    On the Create and configure integrations page, as shown in Figure 4, choose Add integration, then enter or select the following values:

    Select Lambda.
    For Lambda function, select the getProtectedResource Lambda function that you created in the previous section.
    For API name, enter a name. In this example, I used MyApp.
    Choose Next.

    Figure 4: Configuring API Gateway integrations and API name

    On the Configure routes page, as shown in Figure 5, enter or select the following values:

    For Method, select GET.
    For Resource path, enter / (a single forward slash).
    For Integration target, select the getProtectedResource Lambda function.
    Choose Next.

    Figure 5: Configuring API Gateway routes

    On the Configure stages page, keep all the default options, and choose Next.
    On the Review and create page, choose Create.
    Note down the value of Invoke URL, as shown in Figure 6.

    Figure 6: Note down the invoke URL

    Now it’s time to test your API Gateway API. Paste the value of Invoke URL into your browser. You’ll see the following message from your Lambda function: Hello from Lambda.
    1.3. Use Amazon Cognito
    You’ll use Amazon Cognito user pools to create and maintain a user directory, and add sign-up and sign-in to your web application.
    To create an Amazon Cognito user pool

    In the Amazon Cognito console, choose Create user pool.
    On the Authentication providers page, as shown in Figure 7, for Cognito user pool sign-in options, select Email, then choose Next.

    Figure 7: Configuring authentication providers

    In the Multi-factor authentication pane of the Configure security requirements page, as shown in Figure 8, choose your MFA enforcement. For this example, choose No MFA to make it simpler for you to test your solution. However, in production, for data-sensitive workloads you should choose Require MFA – Recommended. Choose Next.

    Figure 8: Configuring MFA

    On the Configure sign-up experience page, keep all the default options and choose Next.
    On the Configure message delivery page, as shown in Figure 9, choose your email provider. For this example, choose Send email with Cognito to make it simple to test your solution. In production workloads, you should choose Send email with Amazon SES – Recommended. Choose Next.

    Figure 9: Configuring email

    In the User pool name section of the Integrate your app page, as shown in Figure 10, enter or select the following values:

    For User pool name, enter a name. In this example, I used MyUserPool.

    Figure 10: Configuring user pool name

    In the Hosted authentication pages section, as shown in Figure 11, select Use the Cognito Hosted UI.

    Figure 11: Configuring hosted authentication pages

    In the Domain section, as shown in Figure 12, for Domain type, choose Use a Cognito domain. For Cognito domain, enter a domain name. Note that domains in Cognito must be unique. Make sure to enter a unique name, for example by appending random numbers at the end of your domain name. For this example, I used https://http-only-cookie-secured-app.

    Figure 12: Configuring an Amazon Cognito domain

    In the Initial app client section, as shown in Figure 13, enter or select the following values:

    For App type, keep the default setting Public client.
    For App client name, enter a friendly name. In this example, I used MyAppClient.
    For Client secret, keep the default setting Don’t generate a client secret.
    For Allowed callback URLs, enter the invoke URL you noted down from API Gateway in the previous section, followed by /oauth2/callback.

    Figure 13: Configuring the initial app client

    Choose Next.

    Choose Create user pool.

    Next, you need to retrieve some Amazon Cognito information for later use.
    To note down Amazon Cognito information

    In the Amazon Cognito console, choose the user pool you created in the previous steps.
    Under User pool overview, make note of the User pool ID value.
    On the App integration tab, under Cognito Domain, make note of the Domain value.
    Under App client list, make note of the Client ID value.
    Under App client list, choose the app client name you created in the previous steps.
    Under Hosted UI, make note of the Allowed callback URLs value.

    Next, create the user that you will use in a later section of this post to run your test.
    To create a user

    In the Amazon Cognito console, choose the user pool you created in the previous steps.
    Under Users, choose Create user.
    For Email address, enter user@example.com. For this tutorial, you don’t need to send out actual emails, so the email address does not need to actually exist.
    Choose Mark email address as verified.
    For password, enter a password you can remember (or even better: use a password generator).
    Remember the email and password for later use.
    Choose Create user.

    1.4. Create the Lambda function oAuth2Callback
    Next, you create the Lambda function oAuth2Callback, which is responsible for issuing and persisting the OAuth2 access tokens.
    To create the Lambda function oAuth2Callback

    In the Lambda console, choose Create function.

    Note: Make sure to select your desired Region.

    For Function name, enter oAuth2Callback.
    For Runtime, select Node.js 16.x.
    For Architecture, select arm64.
    Choose Create function.

    After you create the Lambda function, you need to add the code. Create a new folder on your local machine and open it with your preferred integrated development environment (IDE). Add the package.json and index.js files, as shown in the following examples.
    package.json
    {
      "name": "oAuth2Callback",
      "version": "0.0.1",
      "dependencies": {
        "axios": "^0.27.2",
        "qs": "^6.11.0"
      }
    }
    In a terminal at the root of your created folder, run the following command.
    $ npm install
    In the index.js example code that follows, be sure to replace the placeholders with your values.
    index.js
    const qs = require("qs");
    const axios = require("axios").default;

    exports.handler = async function (event) {
      const code = event.queryStringParameters?.code;
      if (code == null) {
        return {
          statusCode: 400,
          body: "code query param required",
        };
      }
      const data = {
        grant_type: "authorization_code",
        client_id: "", // placeholder: your Cognito app client ID
        // The redirect has already happened, but you still need to pass the URI
        // for validation, so a valid OAuth2 access token can be generated
        redirect_uri: encodeURI(""), // placeholder: your callback URL
        code: code,
      };
      // Every Cognito instance has its own token endpoints. For more information, see
      // https://docs.aws.amazon.com/cognito/latest/developerguide/token-endpoint.html
      const res = await axios.post(
        "/oauth2/token", // placeholder: prefix with your Cognito domain
        qs.stringify(data),
        {
          headers: {
            "Content-Type": "application/x-www-form-urlencoded",
          },
        }
      );
      return {
        statusCode: 302,
        // These headers are returned as part of the response to the browser
        headers: {
          // The Location header tells the browser to redirect to the root of the URL
          Location: "/",
          // The Set-Cookie header tells the browser to persist the access token in the cookie store
          "Set-Cookie": `accessToken=${res.data.access_token}; Secure; HttpOnly; SameSite=Lax; Path=/`,
        },
      };
    };
    Along with the HttpOnly attribute, you pass along two additional cookie attributes:

    Secure – Indicates that cookies are only sent by the browser to the server when a request is made with the https: scheme.
    SameSite – Controls whether or not a cookie is sent with cross-site requests, providing protection against cross-site request forgery attacks. You set the value to Lax because you want the cookie to be set when the user is forwarded from Amazon Cognito to your web application (which runs under a different URL).

    For more information, see Using HTTP cookies on the MDN Web Docs site.
    Afterwards, upload the code to the oAuth2Callback Lambda function as described in Upload a Lambda Function in the AWS Toolkit for VS Code User Guide.
    1.5. Configure an OAuth2 callback route in API Gateway
    Now, you configure API Gateway to use your new Lambda function through a Lambda proxy integration.
    To configure API Gateway to use your Lambda function

    In the API Gateway console, under APIs, choose your API name. For me, the name is MyApp.
    Under Develop, choose Routes.
    Choose Create.
    Enter or select the following values:

    For method, select GET.
    For path, enter /oauth2/callback.

    Choose Create.
    Choose GET under /oauth2/callback, and then choose Attach integration.
    Choose Create and attach an integration.

    For Integration type, choose Lambda function.
    For Lambda function, choose oAuth2Callback from the last step.

    Choose Create.

    Your route configuration in API Gateway should now look like Figure 14.

    Figure 14: Routes for API Gateway

    2. Testing the OAuth2 flow
    Now that you have the components in place, you can test your OAuth2 flow. You test the OAuth2 flow by invoking the login on your browser.
    To test the OAuth2 flow

    In the Amazon Cognito console, choose your user pool name. For me, the name is MyUserPool.
    Under the navigation tabs, choose App integration.
    Under App client list, choose your app client name. For me, the name is MyAppClient.
    Choose View Hosted UI.
    In the newly opened browser tab, open your developer tools, so you can inspect the network requests.
    Log in with the email address and password you set in the previous section. Change your password, if you’re asked to do so. You can also choose the same password as you set in the previous section.
    You should see your Hello from Lambda message.

    To test that the cookie was accurately set

    Check your browser network tab in the browser developer settings. You’ll see the /oauth2/callback request, as shown in Figure 15.

    Figure 15: Callback network request
    The response headers should include a set-cookie header, as you specified in your Lambda function. With the set-cookie header, your OAuth2 access token is set as an HttpOnly cookie in the browser, and access is prohibited from any client-side code.
    Alternatively, you can inspect the cookie in the browser cookie storage, as shown in Figure 16.

    Figure 16: Cookie storage

    If you want to retry the authentication, navigate in your browser to your Amazon Cognito domain that you chose in the previous section and clear all site data in the browser developer tools. Do the same with your API Gateway invoke URL. Now you can restart the test with a clean state.

    3. Deploying the authentication check
    In this section, you’ll deploy the second part of your application: the authentication check. The authentication check makes it so that only authenticated users can access your protected backend. The authentication check works with the HttpOnly cookie, which is stored in the user’s cookie store.
    3.1. Create the Lambda function oAuth2Authorizer
    This Lambda function checks that requests are authenticated.
    To create the Lambda function

    In the Lambda console, choose Create function.

    Note: Make sure to select your desired Region.

    For Function name, enter oAuth2Authorizer.
    For Runtime, select Node.js 16.x.
    For Architecture, select arm64.
    Choose Create function.

    After you create the Lambda function, you need to add the code. Create a new folder on your local machine and open it with your preferred IDE. Add the package.json and index.js files as shown in the following examples.
    package.json
    {
      "name": "oAuth2Authorizer",
      "version": "0.0.1",
      "dependencies": {
        "aws-jwt-verify": "^3.1.0"
      }
    }
    In a terminal at the root of your created folder, run the following command.
    $ npm install
    In the index.js example code, be sure to replace the placeholders with your values.
    index.js
    const { CognitoJwtVerifier } = require("aws-jwt-verify");

    function getAccessTokenFromCookies(cookiesArray) {
      // Each cookieStr contains a full cookie definition string: "accessToken=abc"
      for (const cookieStr of cookiesArray) {
        if (cookieStr.startsWith("accessToken=")) {
          // Return only the value of the access token, without the cookie name
          return cookieStr.substring("accessToken=".length);
        }
      }
      return null;
    }

    // Create the verifier outside the Lambda handler (that is, during cold start),
    // so the cache can be reused for subsequent invocations. Only during the
    // first invocation does the verifier actually need to fetch the JWKS.
    const verifier = CognitoJwtVerifier.create({
      userPoolId: "", // placeholder: your Cognito user pool ID
      tokenUse: "access",
      clientId: "", // placeholder: your Cognito app client ID
    });

    exports.handler = async (event) => {
      if (event.cookies == null) {
        console.log("No cookies found");
        return {
          isAuthorized: false,
        };
      }
      // The cookies array looks something like this:
      // ["accessToken=abc", "otherCookie=Random Value"]
      const accessToken = getAccessTokenFromCookies(event.cookies);
      if (accessToken == null) {
        console.log("Access token not found in cookies");
        return {
          isAuthorized: false,
        };
      }
      try {
        await verifier.verify(accessToken);
        return {
          isAuthorized: true,
        };
      } catch (e) {
        console.error(e);
        return {
          isAuthorized: false,
        };
      }
    };
    After you add the package.json and index.js files, upload the code to the oAuth2Authorizer Lambda function as described in Upload a Lambda Function in the AWS Toolkit for VS Code User Guide.
    3.2. Configure the Lambda authorizer in API Gateway
    Next, you configure your authorizer Lambda function to protect your backend. This way you control access to your HTTP API.
    To configure the authorizer Lambda function

    In the API Gateway console, under APIs, choose your API name. For me, the name is MyApp.
    Under Develop, choose Routes.
    Under / (a single forward slash) GET, choose Attach authorization.
    Choose Create and attach an authorizer.
    Choose Lambda.
    Enter or select the following values:

    For Name, enter oAuth2Authorizer.
    For Lambda function, choose oAuth2Authorizer.
    Clear Authorizer caching. For this tutorial, you disable authorizer caching to make testing simpler. See the section Bonus: Enabling authorizer caching for more information about enabling caching to increase performance.
    Under Identity sources, choose Remove.

    Note: Identity sources are ignored for your Lambda authorizer. These are only used for caching.

    Choose Create and attach.

    Under Develop, choose Routes to inspect all routes.

    Now your API Gateway route /oauth2/callback should be configured as shown in Figure 17.

    Figure 17: API Gateway route configuration

    4. Testing the OAuth2 authorizer
    You did it! From your last test, you should still be authenticated. So, if you open the API Gateway invoke URL in your browser, you’ll be greeted by your protected backend.
    In case you are not authenticated anymore, you’ll have to follow the steps again from the section Testing the OAuth2 flow to authenticate.
    When you inspect the HTTP request that your browser makes in the developer tools as shown in Figure 18, you can see that authentication works because the HttpOnly cookie is automatically attached to every request.

    Figure 18: Browser requests include HttpOnly cookies

    To verify that your authorizer Lambda function works correctly, paste the same invoke URL you noted previously into an incognito window. Incognito windows do not share the cookie store with your browser session, so you’ll see a {"message":"Forbidden"} error message with HTTP response code 403 – Forbidden.
    Cleanup
    Delete all unwanted resources to avoid incurring costs.
    To delete the Amazon Cognito domain and user pool

    In the Amazon Cognito console, choose your user pool name. For me, the name is MyUserPool.
    Under the navigation tabs, choose App integration.
    Under Domain, choose Actions, then choose Delete Cognito domain.
    Confirm by entering your custom Amazon Cognito domain, and choose Delete.
    Choose Delete user pool.
    Confirm by entering your user pool name (in my case, MyUserPool), and then choose Delete.

    To delete your API Gateway resource

    In the API Gateway console, select your API name. For me, the name is MyApp.
    Under Actions, choose Delete and confirm your deletion.

    To delete the AWS Lambda functions

    In the Lambda console, select all three of the Lambda functions you created.
    Under Actions, choose Delete and confirm your deletion.

    Bonus: Enabling authorizer caching
    As mentioned earlier, you can enable authorizer caching to help improve your performance. When caching is enabled for an authorizer, API Gateway uses the authorizer’s identity sources as the cache key. If a client specifies the same parameters in identity sources within the configured Time to Live (TTL), then API Gateway uses the cached authorizer result, rather than invoking your Lambda function.
    To enable caching, your authorizer must have at least one identity source. To cache by the cookie request header, you specify $request.header.cookie as the identity source. Be aware that caching will be affected if you pass along additional HttpOnly cookies apart from the access token.
    For more information, see Working with AWS Lambda authorizers for HTTP APIs in the Amazon API Gateway Developer Guide.
    Conclusion
    In this blog post, you learned how to implement authentication by using HttpOnly cookies. You used Amazon API Gateway and AWS Lambda to persist and validate the HttpOnly cookies, and you used Amazon Cognito to issue OAuth2 access tokens. If you want to try an automated deployment of this solution with a script, see the api-gw-http-only-cookie-auth GitHub repository.
    The application of this solution to protect your secrets from potential cross-site scripting (XSS) attacks is not limited to OAuth2. You can protect other kinds of tokens, sessions, or tracking IDs with HttpOnly cookies.
    In this solution, you used Node.js for your Lambda functions to implement authentication. But HttpOnly cookies are widely supported by many programming frameworks. You can find more implementation options on the OWASP Secure Cookie Attribute page.
    Although this blog post gives you a tutorial on how to implement HttpOnly cookie authentication in API Gateway, it may not meet all your security and functional requirements. Make sure to check your business requirements and talk to your stakeholders before you adopt techniques from this blog post.
    Furthermore, it’s a good idea to continuously test your web application, so that cookies are only set with your approved security attributes. For more information, see the OWASP Testing for Cookies Attributes page.
     If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon API Gateway re:Post or contact AWS Support.
    Want more AWS Security news? Follow us on Twitter.

    Marc Borntraeger
    Marc is a Solutions Architect in healthcare, based in Zurich, Switzerland. He helps security-sensitive customers such as hospitals to re-innovate themselves with AWS.


  • Visualize AWS WAF logs with an Amazon CloudWatch dashboard

    Visualize AWS WAF logs with an Amazon CloudWatch dashboard

    AWS WAF is a web application firewall service that helps you protect your applications from common exploits that could affect your application’s availability and your security posture. One of the most useful ways to detect and respond to malicious web activity is to collect and analyze AWS WAF logs. You can perform this task conveniently by sending your AWS WAF logs to Amazon CloudWatch Logs and visualizing them through an Amazon CloudWatch dashboard.
    In this blog post, I’ll show you how to use Amazon CloudWatch to monitor and analyze AWS WAF activity using the options in CloudWatch metrics, Contributor Insights, and Logs Insights. I’ll also walk you through how to deploy this solution in your own AWS account by using an AWS CloudFormation template.
    Prerequisites
    This blog post builds on the concepts introduced in the blog post Analyzing AWS WAF Logs in Amazon CloudWatch Logs, which shows how to natively set up AWS WAF logging to Amazon CloudWatch Logs and discusses the basic options that are available for visualizing and analyzing the data provided in the logs.
    The only AWS services that you need to turn on for this solution are Amazon CloudWatch and AWS WAF. The solution assumes that you’ve previously set up AWS WAF log delivery to Amazon CloudWatch Logs. If you have not done so, follow the instructions for AWS WAF logging destinations – CloudWatch Logs.
    You will need to provide the following parameters for the CloudFormation template:

    CloudWatch log group name for the AWS WAF logs
    The AWS Region for the logs
    The name of the AWS WAF web access control list (web ACL)

    Solution overview
    The architecture of the solution is outlined in Figure 1. The solution takes advantage of the native integration available between AWS WAF and CloudWatch, which simplifies the setup and management of this solution.

    Figure 1: Solution architecture

    In the solution, the logs are sent to CloudWatch (when you enable log delivery). From there, they’re ready to be consumed by all the different service options that CloudWatch offers, including the ones that we’ll use in this solution: CloudWatch Logs Insights and Contributor Insights.
    Deploy the solution
    Choose the following Launch stack button to launch the CloudFormation stack in your account.

You’ll be redirected to the CloudFormation service in the AWS US East (N. Virginia) Region, the default Region for deploying this solution. You can change the Region as preferred, for example to match the Region where your web ACL is located. The template will spin up multiple cloud resources, such as the following:

    CloudWatch Logs Insights queries
    CloudWatch Contributor Insights visuals
    CloudWatch dashboard

    The solution is quickly deployed to your account and is ready to use in less than 30 minutes. You can use the solution when the status of the stack changes to CREATE_COMPLETE.
    As a measure to control costs, you can also choose whether to create the Contributor Insights rules and enable them by default. For more information on costs, see the Cost considerations section later in this post.
    Explore and validate the dashboard
When the CloudFormation stack is complete, you can choose the Outputs tab in the CloudFormation console and then choose the dashboard link. This will take you to the CloudWatch service in the AWS Management Console. The dashboard time range presents information for the last hour of activity by default, and can go up to one week, but keep in mind that Contributor Insights has a maximum time range of 24 hours. You can also select a different dashboard refresh interval from 10 seconds up to 15 minutes.
    The dashboard provides the following information from CloudWatch.

    Rule name
    Description

    WAF_top_terminating_rules
    This rule shows the top rules where the requests are being terminated by AWS WAF. This can help you understand the main cause of blocked requests.

    WAF_top_ips
    This rule shows the top source IPs for requests. This can help you understand if the traffic and activity that you see is spread across many IPs or concentrated in a small group of IPs.

    WAF_top_countries
    This rule shows the main source countries for the IPs in the requests. This can help you visualize where the traffic is originating.

    WAF_top_user_agents
    This rule shows the main user agents that are being used to generate the requests. This will help you isolate problematic devices or identify potential false positives.

    WAF_top_uri
    This rule shows the main URIs in the requests that are being evaluated. This can help you identify if one specific path is the target of activity.

    WAF_top_http
    This rule shows the HTTP methods used for the requests examined by AWS WAF. This can help you understand the pattern of behavior of the traffic.

    WAF_top_referrer_hosts
    This rule shows the main referrer from which requests are being sent. This can help you identify incorrect or suspicious origins of requests based on the known application flow.

    WAF_top_rate_rules
This rule shows the main rate rules being applied to traffic. This can help you understand volumetric activity identified by AWS WAF.

    WAF_top_labels
    This rule shows the top labels found in logs. This can help you visualize the main rules that are matching on the requests evaluated by AWS WAF.
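Each of the rules above is a CloudWatch Contributor Insights rule over the WAF log group. As an illustration of what such a rule looks like under the hood, here is a minimal rule body in the spirit of WAF_top_ips; the log group name is a placeholder, and the `$.httpRequest.clientIp` selector reflects the field name used in AWS WAF log records.

```python
import json

# Minimal Contributor Insights rule body, modeled on the WAF_top_ips
# rule. The log group name is a placeholder; adapt it to your own
# aws-waf-logs-* log group.
rule_body = {
    "Schema": {"Name": "CloudWatchLogRule", "Version": 1},
    "LogGroupNames": ["aws-waf-logs-my-web-acl"],
    "LogFormat": "JSON",
    "Contribution": {
        # Rank contributors by client IP found in each WAF log record
        "Keys": ["$.httpRequest.clientIp"],
        "Filters": [],
    },
    "AggregateOn": "Count",
}

print(json.dumps(rule_body, indent=2))
```

Swapping the key for `$.httpRequest.country`, `$.httpRequest.uri`, or a user-agent header selector yields the other top-N rules in the table.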

    The dashboard also provides the following information from the default CloudWatch metrics sent by AWS WAF.

    Rule name
    Description

    AllowedvsBlockedRequests
    This metric shows the number of all blocked and allowed requests. This can help you understand the number of requests that AWS WAF is actively blocking.

    Bot Requests vs non-Bot requests
    This visual shows the number of requests identified as bots versus non-bots (if you’re using AWS WAF Bot Control).

    All Requests
    This metric shows the number of all requests, separated by bot and non-bot origin. This can help you understand all requests that AWS WAF is evaluating.

    CountedRequests
This metric shows the number of all counted requests. This can help you understand the requests that are matching a rule but not being blocked, and can inform configuration changes during the testing phase.

    CaptchaRequests
    This metric shows requests that go through the CAPTCHA rule.

    Figure 2 shows an example of how the CloudWatch dashboard displays the data within this solution. You can rearrange and customize the elements within the dashboard as needed.

    Figure 2: Example dashboard

    You can review each of the queries and rules deployed with this solution. You can also customize these baseline queries and rules to provide more detailed information or to add custom queries and rules to the solution code. For more information on how to build queries and use CloudWatch Logs and Contributor Insights, see the CloudWatch documentation.
    Use the dashboard for monitoring
    After you’ve set up the dashboard, you can monitor the activity of the sites that are protected by AWS WAF. If suspicious activity is reported, you can use the visuals to understand the traffic in more detail, and drive incident response actions as needed.
    Let’s consider an example of how to use your new dashboard and its data to drive security operations decisions. Suppose that you have a website that sells custom clothing at a bargain price. It has a sign-up link to receive offers, and you’re getting reports of unusual activity by the application team. By looking at the metrics for the web ACL that protects the site, you can see the main country for source traffic and the contributing URIs, as shown in Figure 3. You can also see that most of the activity is being detected by rules that you have in place, so you can set the rules to block traffic, or if they are already blocking, you can just monitor the activity.

    Figure 3: Metrics on website activity

    You can use the same visuals to decide whether an AWS WAF rule with high activity can be changed to autoblock suspicious web traffic without affecting valid customer traffic. By looking at the top terminating rules and cross-referencing information, such as source IPs, user agents, top URIs, and other request identifiers, you can understand the traffic pattern and activity of different applications and endpoints. From here, you can investigate further by using specific queries with CloudWatch Logs Insights.
    Operational and security management with CloudWatch Logs Insights
    You can use CloudWatch Logs Insights to interactively search and analyze log data in Amazon CloudWatch Logs using advanced queries to effectively investigate operational issues and security incidents.
    Examine a bot reported as a false positive
    You can use CloudWatch Logs Insights to identify requests that have specific labels to understand where the traffic is originating from based on source IP address and other essential event details. A simple example is investigating requests flagged as potential false positives.
    Imagine that you have a reported false positive request that was flagged as a non-browser by AWS WAF Bot Control. You can run the non-browser user agent query that was created by the provided template on CloudWatch Logs Insights, as shown in the following example, and then verify the source IPs for the top hits for this rule group. Or you can look for a specific request that has been flagged as a false positive, in order to review the details and make adjustments as needed.

fields @timestamp, httpRequest.clientIp
| filter @message like "awswaf:managed:aws:botcontrol:signal:non_browser_user_agent"
| parse @message '"labels":[*]' as Labels
| stats count(*) as requestCount by httpRequest.clientIp
| display @timestamp, httpRequest.clientIp, httpRequest.uri, Labels
| sort requestCount desc
| limit 10
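To make the query's logic concrete, the same filter-and-count aggregation can be mimicked locally over a few toy WAF log records (the IPs and URIs below are illustrative, not real data):

```python
from collections import Counter

# Toy WAF log records. The query filters events carrying the
# non-browser user agent label, counts requests per client IP,
# and ranks the top sources.
events = [
    {"httpRequest": {"clientIp": "198.51.100.7", "uri": "/signup"},
     "labels": [{"name": "awswaf:managed:aws:botcontrol:signal:non_browser_user_agent"}]},
    {"httpRequest": {"clientIp": "198.51.100.7", "uri": "/signup"},
     "labels": [{"name": "awswaf:managed:aws:botcontrol:signal:non_browser_user_agent"}]},
    {"httpRequest": {"clientIp": "203.0.113.9", "uri": "/"},
     "labels": []},
]

label = "non_browser_user_agent"
counts = Counter(
    e["httpRequest"]["clientIp"]
    for e in events
    if any(label in l["name"] for l in e["labels"])
)

# Equivalent of "sort requestCount desc | limit 10"
for ip, n in counts.most_common(10):
    print(ip, n)
```

Only the two labeled requests are counted, so the bot-flagged IP surfaces at the top while the unlabeled request drops out.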

The non-browser user agent query also allows you to confirm whether this request has other rule hits that were in count mode and were non-terminating; you can do this by examining the labels. If there are multiple rules matching the requests, that can be an indicator of suspicious activity.
If you have a CAPTCHA challenge configured on the endpoint, you can also look at CAPTCHA responses. The CaptchaToken query definition provided in this solution uses a variation of the preceding format, and can display the main IPs from which bad tokens are being sent. An example query is shown following, along with the query results in Figure 4. If you have signals from non-browser user agents and CAPTCHA tokens missing, then that is a strong indicator of suspicious activity.

fields @timestamp, httpRequest.clientIp
| filter captchaResponse.failureReason = "TOKEN_MISSING"
| stats count(*) as requestCount by httpRequest.clientIp, httpRequest.country
| sort requestCount desc
| limit 10

    Figure 4: Main IP addresses and number of counts for CAPTCHA responses
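The grouping this query performs (counting by the pair of client IP and country) can likewise be sketched locally; the records and failure reasons below are illustrative:

```python
from collections import Counter

# Toy records mirroring the CAPTCHA query: group TOKEN_MISSING
# responses by (client IP, source country).
events = [
    {"httpRequest": {"clientIp": "198.51.100.7", "country": "US"},
     "captchaResponse": {"failureReason": "TOKEN_MISSING"}},
    {"httpRequest": {"clientIp": "198.51.100.7", "country": "US"},
     "captchaResponse": {"failureReason": "TOKEN_MISSING"}},
    {"httpRequest": {"clientIp": "203.0.113.9", "country": "IE"},
     "captchaResponse": {"failureReason": "TOKEN_EXPIRED"}},
]

counts = Counter(
    (e["httpRequest"]["clientIp"], e["httpRequest"]["country"])
    for e in events
    if e.get("captchaResponse", {}).get("failureReason") == "TOKEN_MISSING"
)

for (ip, country), n in counts.most_common(10):
    print(ip, country, n)
```

Grouping on a composite key is what lets the dashboard show both where the bad tokens come from and how concentrated they are.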

    This information can provide an indication of the main source of activity. You can then use other visuals, like top user agents or top referrers, to provide more context to the information and inform further actions, such as adding new rules to the AWS WAF configuration.
    You can adapt the queries provided in the sample solution to other use cases by using the fields provided in the left-hand pane of CloudWatch Logs Insights.
    Cost considerations
Configuring AWS WAF to send logs to Amazon CloudWatch Logs doesn’t have an additional cost. The cost incurred is for the use of the CloudWatch features and services, such as log storage and retention, Contributor Insights rules enabled, Logs Insights queries run, matched log events, and CloudWatch dashboards. For detailed information on the pricing of these features, see the CloudWatch Logs pricing information. You can also get an estimate of potential costs by using the AWS pricing calculator for CloudWatch.
One way to help control costs is to limit use of the dashboard and enforce a cost-effective log retention policy for the AWS WAF logs. Running queries and monitoring only as needed also helps reduce costs, because you’re billed for the queries you run and for the log events matched by enabled Contributor Insights rules, so you can enable the rules only when you need them. AWS WAF also provides the option to filter the logs that are sent when logging is enabled. For more information, see AWS WAF log filtering.
    Conclusion
    In this post, you learned how to use a pre-built CloudWatch dashboard to monitor AWS WAF activity by using metrics and Contributor Insights rules. The dashboard can help you identify traffic patterns and activity, and you can use the sample Logs Insights queries to explore the log information in more detail and examine false positives and suspicious activity, for rule tuning.
    For more information on AWS WAF and the features mentioned in this post, see the AWS WAF documentation.
    If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on AWS WAF re:Post.
    Want more AWS Security news? Follow us on Twitter.

    Diana Alvarado
Diana is a senior security solutions architect at AWS. She is passionate about helping customers solve difficult cloud challenges, and she has a soft spot for all things logs.


  • How to run AWS CloudHSM workloads in container environments

    How to run AWS CloudHSM workloads in container environments

    January 25, 2023: We updated this post to reflect the fact that CloudHSM SDK3 does not support serverless environments and we strongly recommend deploying SDK5.

    AWS CloudHSM provides hardware security modules (HSMs) in the AWS Cloud. With CloudHSM, you can generate and use your own encryption keys in the AWS Cloud, and manage your keys by using FIPS 140-2 Level 3 validated HSMs. Your HSMs are part of a CloudHSM cluster. CloudHSM automatically manages synchronization, high availability, and failover within a cluster.
    CloudHSM is part of the AWS Cryptography suite of services, which also includes AWS Key Management Service (AWS KMS), AWS Secrets Manager, and AWS Private Certificate Authority (AWS Private CA). AWS KMS, Secrets Manager, and AWS Private CA are fully managed services that are convenient to use and integrate. You’ll generally use CloudHSM only if your workload requires single-tenant HSMs under your own control, or if you need cryptographic algorithms or interfaces that aren’t available in the fully managed alternatives.
    CloudHSM offers several options for you to connect your application to your HSMs, including PKCS#11, Java Cryptography Extensions (JCE), OpenSSL Dynamic Engine, or Microsoft Cryptography API: Next Generation (CNG). Regardless of which library you choose, you’ll use the CloudHSM client to connect to HSMs in your cluster.
    In this blog post, I’ll show you how to use Docker to develop, deploy, and run applications by using the CloudHSM SDK, and how to manage and orchestrate workloads by using tools and services like Amazon Elastic Container Service (Amazon ECS), Kubernetes, Amazon Elastic Kubernetes Service (Amazon EKS), and Jenkins.
    Solution overview
    This solution demonstrates how to create a Docker container that uses the CloudHSM JCE SDK to generate a key and use it to encrypt and decrypt data.

    Note: In this example, you must manually enter the crypto user (CU) credentials as environment variables when you run the container. For production workloads, you’ll need to consider how to secure and automate the handling and distribution of these credentials. You should work with your security or compliance officer to ensure that you’re using an appropriate method of securing HSM login credentials. For more information on securing credentials, see AWS Secrets Manager.
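As a hedged illustration of the Secrets Manager approach mentioned in the note, the credential lookup could be factored into a small helper like the one below. The secret name and the HSM_USER/HSM_PASSWORD key layout are hypothetical choices for this sketch, not part of the CloudHSM SDK; in practice you would pass a real `boto3.client("secretsmanager")` instead of the stub used here to keep the example self-contained.

```python
import json

def get_cu_credentials(secrets_client, secret_id):
    """Fetch CloudHSM crypto user (CU) credentials from AWS Secrets Manager.

    Assumes the secret value is a JSON object with hypothetical
    HSM_USER and HSM_PASSWORD keys; adapt the layout to your secret.
    """
    response = secrets_client.get_secret_value(SecretId=secret_id)
    secret = json.loads(response["SecretString"])
    return secret["HSM_USER"], secret["HSM_PASSWORD"]


# Stand-in client so the sketch runs without AWS access; in practice,
# pass boto3.client("secretsmanager") instead.
class _StubSecretsClient:
    def get_secret_value(self, SecretId):
        return {"SecretString": json.dumps(
            {"HSM_USER": "cu-app", "HSM_PASSWORD": "example-only"})}

user, password = get_cu_credentials(_StubSecretsClient(), "cloudhsm/cu-credentials")
print(user)
```

The credentials returned this way could then be injected into the container environment at launch rather than stored in the image or typed by hand.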

    Figure 1 shows the solution architecture. The Java application, running in a Docker container, integrates with JCE and communicates with CloudHSM instances in a CloudHSM cluster through HSM elastic network interfaces (ENIs). The Docker container runs in an EC2 instance, and access to the HSM ENIs is controlled with a security group.

    Figure 1: Architecture diagram

    Prerequisites
    To implement this solution, you need to have working knowledge of the following items:

    CloudHSM
    Docker 20.10.17 – used at the time of this post
    Java 8 or Java 11 – supported at the time of this post
Maven 3.0.5 – used at the time of this post

    Here’s what you’ll need to follow along with my example:

    An active CloudHSM cluster with at least one active HSM instance. You can follow the CloudHSM getting started guide to create, initialize, and activate a CloudHSM cluster.

    Note: For a production cluster, you should have at least two active HSM instances spread across Availability Zones in the Region.

    An Amazon Linux 2 EC2 instance in the same virtual private cloud (VPC) in which you created your CloudHSM cluster. The Amazon Elastic Compute Cloud (Amazon EC2) instance must have the CloudHSM cluster security group attached—this security group is automatically created during the cluster initialization and is used to control network access to the HSMs. To learn about attaching security groups to allow EC2 instances to connect to your HSMs, see Create a cluster in the AWS CloudHSM User Guide.
    A CloudHSM crypto user (CU) account. You can create a CU by following the steps in the topic Managing HSM users in AWS CloudHSM in the AWS CloudHSM User Guide.

    Solution details
    In this section, I’ll walk you through how to download, configure, compile, and run a solution in Docker.
    To set up Docker and run the application that encrypts and decrypts data with a key in AWS CloudHSM

On your Amazon Linux EC2 instance, install Docker by running the following command.
# sudo yum -y install docker
Start the Docker service.
# sudo service docker start
Create a new directory and move to it. In my example, I use a directory named cloudhsm_container. You’ll use the new directory to configure the Docker image.
# mkdir cloudhsm_container
# cd cloudhsm_container
    Copy the CloudHSM cluster’s trust anchor certificate (customerCA.crt) to the directory that you just created. You can find the trust anchor certificate on a working CloudHSM client instance under the path /opt/cloudhsm/etc/customerCA.crt. The certificate is created during initialization of the CloudHSM cluster and is required to connect to the CloudHSM cluster. This enables our application to validate that the certificate presented by the CloudHSM cluster was signed by our trust anchor certificate.
    In your new directory (cloudhsm_container), create a new file with the name run_sample.sh that includes the following contents. The script runs the Java class that is used to generate an Advanced Encryption Standard (AES) key to encrypt and decrypt your data.

#! /bin/bash

# start application
echo -e "\n* Entering AES GCM encrypt/decrypt sample in Docker ... \n"

java -ea -jar target/assembly/aesgcm-runner.jar -method environment

echo -e "\n* Exiting AES GCM encrypt/decrypt sample in Docker ... \n"

    In the new directory, create another new file and name it Dockerfile (with no extension). This file will specify that the Docker image is built with the following components:

    The CloudHSM client package.
    The CloudHSM Java JCE package.
    OpenJDK 1.8 (Java 8). This is needed to compile and run the Java classes and JAR files.
    Maven, a build automation tool that is needed to assist with building the Java classes and JAR files.
    The AWS CloudHSM Java JCE samples that will be downloaded and built as part of the solution.

    Cut and paste the following contents into Dockerfile.

    Note: You will need to customize your Dockerfile, as follows:

Make sure to specify the SDK version to replace the one specified in the pom.xml file in the sample code. As of the writing of this post, the most current version is 5.7.0. To find the SDK version, follow the steps in the topic Check your client SDK version. For more information, see the Building section in the README file for the CloudHSM JCE examples.
    Make sure to update the HSM_IP line with the IP of an HSM in your CloudHSM cluster. You can get your HSM IPs from the CloudHSM console, or by running the describe-clusters AWS CLI command.

# Use the Amazon Linux image
FROM amazonlinux:2

# Pass HSM IP address as a build argument
ARG HSM_IP

# Install CloudHSM client
RUN yum install -y https://s3.amazonaws.com/cloudhsmv2-software/CloudHsmClient/EL7/cloudhsm-jce-latest.el7.x86_64.rpm

# Install Java, Maven, wget, unzip and ncurses-compat-libs
RUN yum install -y java maven wget unzip ncurses-compat-libs

# Create a work dir
WORKDIR /app

# Download sample code
RUN wget https://github.com/aws-samples/aws-cloudhsm-jce-examples/archive/refs/heads/sdk5.zip

# Unzip sample code
RUN unzip sdk5.zip

# Change to the created directory
WORKDIR aws-cloudhsm-jce-examples-sdk5

# Build JAR files using the installed CloudHSM JCE Provider version
RUN export CLOUDHSM_CLIENT_VERSION=`rpm -qi cloudhsm-jce | awk -F': ' '/Version/ {print $2}'` \
    && mvn validate -DcloudhsmVersion=$CLOUDHSM_CLIENT_VERSION \
    && mvn clean package -DcloudhsmVersion=$CLOUDHSM_CLIENT_VERSION

# Configure cloudhsm-client
COPY customerCA.crt /opt/cloudhsm/etc/
RUN /opt/cloudhsm/bin/configure-jce -a $HSM_IP

# Copy the run_sample.sh script
COPY run_sample.sh .

# Run the script
CMD ["bash", "run_sample.sh"]

Now you’re ready to build the Docker image. Run the following command, replacing <HSM_IP> with the IP address of an HSM in your cluster. This command uses the Dockerfile that you created in step 6 to create an image named jce_sample.
# sudo docker build --build-arg HSM_IP="<HSM_IP>" -t jce_sample .
To run a Docker container from the Docker image that you just created, run the following command. Make sure to replace <HSM_USER> and <HSM_PASSWORD> with your actual CU username and password. (If you need help setting up your CU credentials, see prerequisite 3. For more information on how to provide CU credentials to the AWS CloudHSM Java JCE Library, see Providing credentials to the JCE provider in the CloudHSM User Guide).
# sudo docker run --env HSM_USER=<HSM_USER> --env HSM_PASSWORD=<HSM_PASSWORD> jce_sample
If successful, the output should look like this:

    * Entering AES GCM encrypt/decrypt sample in Docker …

    737F92D1B7346267D329C16E
    Successful decryption

    * Exiting AES GCM encrypt/decrypt sample in Docker …

    Conclusion
    This solution provides an example of how to run CloudHSM client workloads in Docker containers. You can use the solution as a reference to implement your cryptographic application in a way that benefits from the high availability and load balancing built in to CloudHSM without compromising the flexibility that Docker provides for developing, deploying, and running applications.
    If you have comments about this post, submit them in the Comments section below.
    Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

    Derek Tumulak
    Derek joined AWS in May 2021 as a Principal Product Manager. He is a data protection and cybersecurity expert who is enthusiastic about assisting customers with a wide range of sophisticated use cases.


  • United Arab Emirates IAR compliance assessment report is now available with 58 services in scope

    United Arab Emirates IAR compliance assessment report is now available with 58 services in scope

    Amazon Web Services (AWS) is pleased to announce the publication of our compliance assessment report on the Information Assurance Regulation (IAR) established by the Telecommunications and Digital Government Regulatory Authority (TDRA) of the United Arab Emirates. The report covers the AWS Middle East (UAE) Region, with 58 services in scope of the assessment.
    The IAR provides management and technical information security controls to establish, implement, maintain, and continuously improve information assurance. AWS alignment with IAR requirements demonstrates our ongoing commitment to adhere to the heightened expectations for cloud service providers. As such, IAR-regulated customers can use AWS services with confidence.
    Independent third-party auditors from BDO evaluated AWS for the period of November 1, 2021, to October 31, 2022. The assessment report illustrating the status of AWS compliance is available through AWS Artifact. AWS Artifact is a self-service portal for on-demand access to AWS compliance reports. Sign in to AWS Artifact in the AWS Management Console, or learn more at Getting Started with AWS Artifact.
    For up-to-date information, including when additional services are added, see AWS Services in Scope by Compliance Program and choose IAR.
    AWS strives to continuously bring services into the scope of its compliance programs to help you meet your architectural and regulatory needs. If you have questions or feedback about IAR compliance, reach out to your AWS account team.
    To learn more about our compliance and security programs, see AWS Compliance Programs. As always, we value your feedback and questions; reach out to the AWS Compliance team through the Contact Us page.
     If you have feedback about this post, submit comments in the Comments section below.
    Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

    Ioana Mecu
Ioana is a Security Audit Program Manager at AWS based in Madrid, Spain. She leads security audits, attestations, and certification programs across Europe and the Middle East. Ioana previously spent 15 years working in risk management, security assurance, and technology audits in the financial sector.

    Gokhan Akyuz
    Gokhan is a Security Audit Program Manager at AWS based in Amsterdam, Netherlands. He leads security audits, attestations, and certification programs across Europe and the Middle East. Gokhan has more than 15 years of experience in IT and cybersecurity audits and controls implementation in a wide range of industries.
