How to catch GitHub Actions workflow injections before attackers do

You already know that security is important to keep in mind when creating code and maintaining projects. Odds are, you also know that it’s much easier to think about security from the ground up rather than trying to squeeze it in at the end of a project.

But did you know that GitHub Actions injections are one of the most common vulnerabilities in projects stored in GitHub repositories? Thankfully, this is a relatively easy vulnerability to address, and GitHub has some tools to make it even easier.

A bar chart detailing the most common vulnerabilities found by CodeQL in 2024. In order from most to least, they are: injection, broken access control, insecure design, cryptographic failures, identification and authentication failures, security misconfigurations, software and data integrity failures, security logging and monitoring failures, server side request forgery, and vulnerable and outdated components.
From the 2024 Octoverse report, detailing the most common types of OWASP-classified vulnerabilities identified by CodeQL in 2024. Our latest data shows a similar trend, highlighting the continued risk of injection attacks despite decades of warnings.

Embracing a security mindset

The truth is that security is not something that is ever “done.” It’s a continuous process, one that you need to keep focusing on to help keep your code safe and secure. While automated tools are a huge help, they’re not an all-in-one, fire-and-forget solution.

This is why it’s important to understand the causes behind security vulnerabilities as well as how to address them. No tool will be 100% effective, but by increasing your understanding and deepening your knowledge, you will be better able to respond to threats. 

With that in mind, let’s talk about one of the most common vulnerabilities found in GitHub repositories.

Explaining actions workflow injections

So what exactly is a GitHub Actions workflow injection? This is when a malicious attacker is able to submit a command that is run by a workflow in your repository. This can happen when an attacker controls the data, such as when they create an issue title or a branch name, and you execute that untrusted input. For example, you might execute it in the run portion of your workflow.

One of the most common causes of this is the ${{}} syntax in your code. In the preprocessing step, this syntax is automatically expanded, and that expansion may alter your code by inserting new commands. Then, when the system executes the code, those malicious commands are executed too.

Consider the following workflow as an example:

- name: print title
  run: echo "${{ github.event.issue.title }}"

Let’s assume that this workflow is triggered whenever a user creates an issue. An attacker can then create an issue with malicious code in the title, and that code will be executed when the workflow runs. The attacker only needs a small amount of trickery, such as wrapping a command in backtick characters in the title: `touch pwned.txt`. Furthermore, this code will run using the permissions granted to the workflow, permissions the attacker is otherwise unlikely to have.
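To make the expansion concrete, here is a sketch of what the run step above effectively becomes once the expression is expanded with that malicious title (pwned.txt is just an illustration):

- name: print title
  run: echo "`touch pwned.txt`"

Because the expansion happens before the shell ever sees the script, the backticks act as command substitution: the shell runs touch pwned.txt with the workflow’s permissions and then echoes its (empty) output.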

This is the root of the actions workflow injection. The biggest challenges with actions workflow injections are being aware that they’re a problem in the first place and finding all the places that could lead to this vulnerability.

How to proactively protect your code

As stated earlier, it’s easier to prevent a vulnerability from appearing than it is to catch it after the fact. To that end, there are a few things that you should keep in mind while writing your code to help protect yourself from actions workflow injections.

While these are valuable tips, remember that even if you follow all of these guidelines, it doesn’t guarantee that you’re completely protected.

Use environment variables

Remember that actions workflow injections happen as a result of expanding what should be treated as untrusted input. If that input contains malicious code, inserting it into your workflow changes the intended behavior, and when the workflow triggers and executes, the attacker’s code runs.

One solution is to avoid using the ${{}} syntax in workflow sections like run. Instead, expand the untrusted data into an environment variable and then use the environment variable when you are running the workflow. Applied to our example above, this would change to the following.

- name: print title
  env:
    TITLE: ${{ github.event.issue.title }}
  run: echo "$TITLE"

This won’t make the input trusted, but it will help to protect you from some of the ways attackers could take advantage of this vulnerability. We encourage you to do this, but still remember that this data is untrusted and could be a potential risk.

The principle of least privilege is your best friend

When an actions workflow injection triggers, it runs with the permissions granted to the workflow. You can specify what permissions workflows have by setting the permissions for the workflow’s GITHUB_TOKEN. For this reason, it’s important to make sure that your workflows are only running with the lowest privilege levels they need to perform their duties. Otherwise, you might be giving an attacker permissions you didn’t intend if they manage to inject their code into your workflow.
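As a sketch, a minimal permissions block for a workflow that only needs to read the repository and update issues might look like this (the exact scopes you need depend on what your workflow does):

permissions:
  contents: read   # read-only access to the repository contents
  issues: write    # only if the workflow actually needs to update issues

You can set permissions at the workflow level or per job; any scope you don’t list is not granted to the GITHUB_TOKEN.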

Be cautious with pull_request_target

There is a significant difference between the pull_request and pull_request_target workflow triggers, and the impact of an injection is usually much more devastating in a workflow triggered on pull_request_target than in one triggered on pull_request.

The pull_request workflow trigger prevents write permissions and secrets access on the target repository by default when the workflow is triggered from a fork. It does this to help prevent unauthorized access and protect your repository. Note that when the workflow is triggered from a branch in the same repository, it still has access to secrets and potentially has write permissions.

By contrast, the pull_request_target workflow trigger gives the workflow writer the ability to release some of the restrictions. While this is important for some scenarios, it does mean that by using pull_request_target instead of pull_request, you are potentially putting your repository at a greater risk.

This means you should be using the pull_request trigger unless you have a very specific need to use pull_request_target. And if you are using the latter, you want to take extra care with the workflow given the additional permissions.
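For reference, a minimal sketch of the safer default: with the pull_request trigger, workflow runs for pull requests from forks get a read-only token and no secrets.

on:
  pull_request:
    branches: [main]

If you do need pull_request_target, be especially careful about checking out or executing code from the pull request’s head, since the workflow runs with the target repository’s permissions.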

The problem’s not just on main

It’s not uncommon to create several branches while developing your code, often for various features or bug fixes. This is a normal part of the software development cycle. And sometimes we’re not the best at remembering to close and delete those branches after merging or after we’ve finished working with them. Unfortunately, these branches are still a potential vulnerability if you’re using the pull_request_target trigger.

An attacker can target a workflow that runs on a pull request in a branch, and still take advantage of this exploit. This means that you can’t just assume your repository is safe because the workflows against your main branch are secure. You need to review all of the branches that are publicly visible in your repository.

What CodeQL brings to the table

CodeQL is GitHub’s code analysis tool that provides automated security checks against your code. The specific feature of CodeQL that is most relevant here is the code scanning feature, which can provide feedback on your code and help identify potential security vulnerabilities. We recently made the ability to scan GitHub Actions workflow files generally available, and you can use this feature to look for several types of vulnerabilities, such as potential actions workflow injection risks. 

One of the reasons CodeQL is so good at finding where untrusted data might be used is because of taint tracking. We added taint tracking to CodeQL for actions late last year. With taint tracking, CodeQL tracks where untrusted data flows through your code and identifies potential risks that might not be as obvious as the previous examples.

Enabling CodeQL to scan your actions workflows is as easy as enabling CodeQL code scanning with the default setup, which automatically includes analyzing actions workflows and will run on any protected branch. You can then check for the code scanning results to identify potential risks and start fixing them. 

If you’re already using the advanced setup for CodeQL, you can add support for scanning your actions workflows by adding the actions language to the target languages. These scans will be performed going forward and help to identify these vulnerabilities.
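As a rough sketch of what that change could look like in an advanced-setup workflow (your existing file will differ, and javascript here is just a placeholder for the languages you already scan):

strategy:
  matrix:
    language: [ javascript, actions ]   # add 'actions' alongside your existing languages
steps:
  - uses: actions/checkout@v4
  - uses: github/codeql-action/init@v3
    with:
      languages: ${{ matrix.language }}
  - uses: github/codeql-action/analyze@v3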

While we won’t get into it in this blog, it’s important to know that CodeQL code scanning runs several queries—it’s not just good at finding actions workflow injections. We encourage you to give it a try and see what it can find. 

While CodeQL is a very effective tool, and it is really good at finding this specific vulnerability, it’s still not going to be 100% effective. Remember that no tool is perfect, and you should focus on keeping a security mindset and taking a critical eye to your own code. By keeping this at the forefront of your thoughts, you will be able to develop more secure code and help prevent these vulnerabilities from ever appearing in the first place.

Future steps

Actions workflow injections are known to be one of the most prevalent vulnerabilities in repositories available on GitHub. However, they are relatively easy to address. The biggest issues with eliminating this vulnerability are simply being aware that they’re a problem and discovering the possible weak spots in your code.

Now that you’re aware of the issue, and have CodeQL on your side as a useful tool, you should be able to start looking for and fixing these vulnerabilities in your own code. And if you keep the proactive measures in mind, you’ll be in a better position to prevent them from occurring in future code you write.

If you’d like to learn more about actions workflow injections, we previously published a four-part series about keeping your actions workflows secure. The second part is specifically about actions workflow injections, but we encourage you to give the entire series a read.

Need some help searching through your code to look for potential vulnerabilities? Set up code scanning in your project today.

The post How to catch GitHub Actions workflow injections before attackers do appeared first on The GitHub Blog.


CCC demands: No closed doors in the Digital Committee!

The Chaos Computer Club (CCC), together with more than twenty organizations from civil society and academia, is calling on the Digital Committee of the German Bundestag to back away from its plans to meet in secret from now on. Such sessions held behind closed doors would be plainly reactionary, because they represent a considerable step backwards for transparency and participation.

Blackbox Palantir

Today, the Gesellschaft für Freiheitsrechte, with the support of the Chaos Computer Club, filed a constitutional complaint against automated police data analysis in Bavaria.

Benchmarking and profiling Java with JMH


Table of Contents

  • Introduction: Why JMH?
  • Dependencies
  • Creating your first benchmark
  • Benchmark modes
  • State management
  • Understanding JMH output
  • Prevent dead code optimizations
  • Constant folding
  • Using async profiler with JMH
  • Bonus: Linux tools
  • Conclusion


Introduction: Why JMH?

Performance matters in Java applications, but measuring it accurately is harder than you might think. I've seen countless developers try to measure performance by wrapping code in System.currentTimeMillis() calls or using simple timing loops, only to get misleading results due to JVM optimizations, garbage collection, or just mistakes during measurement.

The JVM is incredibly good at optimizing code, sometimes so good that it optimizes away the very code you're trying to benchmark. Dead code elimination, constant folding, and just-in-time compilation can all skew your measurements in ways that don't reflect real-world performance.

That's where JMH (Java Microbenchmark Harness) comes in. In this post, I'll walk you through everything you need to know to start benchmarking your Java code, from basic setup to advanced profiling techniques that can help you identify performance bottlenecks.

Dependencies

For this post, we will use the following dependencies:

<dependency>
  <groupId>org.openjdk.jmh</groupId>
  <artifactId>jmh-core</artifactId>
  <version>1.37</version>
</dependency>
<dependency>
  <groupId>org.openjdk.jmh</groupId>
  <artifactId>jmh-generator-annprocess</artifactId>
  <version>1.37</version>
</dependency>

These dependencies are needed to run the benchmarks and to use the annotations. You can find the latest version of jmh-core here. The latest version of jmh-generator-annprocess can be found here.

To use the async profiler, you need to download the async profiler from here. Add the async profiler to the classpath. If you are using Linux you can also copy the async profiler to one of the following directories: /usr/java/packages/lib, /usr/lib64, /lib64, /lib, /usr/lib.

Creating your first benchmark

The easiest way to get started is to create a new class and give it a main method to start the benchmark. In the following example, you can see one way of doing this using the OptionsBuilder. It lets you configure everything from which benchmarks to run to how many iterations to perform.

public static void main(String[] args) throws RunnerException {
    Options opt = new OptionsBuilder()
            .include(Main.class.getSimpleName())
            .build();

    new Runner(opt).run();
}

In the previous example, you can see that we are using the OptionsBuilder to create the options. The OptionsBuilder has a lot of methods to configure the benchmark, such as whether to run garbage collection between iterations, how many threads to use, or whether to attach the async profiler, and many more. For this example, we use include to specify which benchmark class we want to run; in this case, the Main class.
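As a sketch of what a more fully configured run could look like (the values here are just examples, not recommendations):

public static void main(String[] args) throws RunnerException {
    Options opt = new OptionsBuilder()
            .include(Main.class.getSimpleName())
            .forks(1)                  // number of forked JVMs to run the benchmark in
            .warmupIterations(3)       // warmup iterations before measuring
            .measurementIterations(5)  // measured iterations
            .threads(4)                // worker threads per benchmark
            .shouldDoGC(true)          // request a GC between iterations
            .build();

    new Runner(opt).run();
}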

With that out of the way, we can start writing our first benchmark. Using annotations, creating a benchmark is very straightforward. All you need to do is add the @Benchmark annotation to the method you want to benchmark.

@Benchmark
public void myFirstBenchmark() {
    
}

The @Benchmark annotation tells JMH that this method is a benchmark. The code inside the method will be executed during the benchmark. This is just an empty method for now, but this should be enough to get you started with your own code. The next section shows the different modes for running benchmarks. We will also add some code to this example later on.

Benchmark modes

There are really only four modes you can use to run your benchmarks. These modes are:

  • Average time: Continuously calls the benchmark method and reports the average time per call. The benchmark runs until the iteration time expires.
  • Single shot time: Used to measure the time of a single call. This is handy for measuring a cold start.
  • Throughput: Counts the total throughput of each worker thread till the iteration time expires.
  • Sample time: Randomly samples the time needed for the call.

You can set the mode using the @BenchmarkMode(Mode.Throughput) annotation. The mode you should use depends on what you want to measure. For example, if you want to measure the time needed for a single method call, use the SingleShotTime mode. If you want to measure the throughput of your code, use the Throughput mode. If you want to measure the average time needed to execute a method, use the AverageTime mode.
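As a minimal sketch, a benchmark that measures average time and reports it in microseconds could be annotated like this:

@Benchmark
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)  // report the average time per call in microseconds
public void averageTimeExample() {
    // code under test goes here
}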

State management

When you write benchmarks, you will probably need some state at a point in time. For example, you might need to have some objects in place for your benchmark to run. If you create these objects during the benchmark, they will be timed as well. To avoid this, you can use the @State annotation and move the initialization of the objects outside the benchmark method to a @Setup method. You can use @State on the benchmark class or on a separate class.

I like to separate the state management from the benchmark class. This way I can reuse the state for multiple benchmarks, and it keeps the benchmark methods more readable. In the following example, you can see the state for a benchmark that is going to sort a given array.

@State(Scope.Thread)
public class BenchState {
  private int[] unsorted;

  @Setup()
  public void setUp() {
    unsorted = new int[]{1,5,7,9,10,6,3,1,8,3,4,6};
  }

  public int[] getUnsorted() {
    return unsorted;
  }

}

The two annotations are @State and @Setup. The @State annotation tells JMH that this class is a state class. The @Setup annotation tells JMH that this method is a setup method. The setup method is called before the benchmark method is called. In this example, we are creating an array with some numbers and storing it in the unsorted variable.

To use this state in a benchmark, you need to add the state-annotated class as a parameter to the benchmark method. As you can see in the following example.

@Benchmark
public void myFirstBenchmark(BenchState benchState) {
    int[] unsorted = benchState.getUnsorted();
    Arrays.sort(unsorted);
}

When you run this benchmark it will sort the array that is stored in the unsorted variable.

Keeping the state correct

In the previous example, there is a bug hiding in plain sight: the array is only truly unsorted on the first invocation. Arrays.sort() modifies the original array, so after the first benchmark invocation you're no longer sorting unsorted data, you're sorting an already sorted array, which is much faster and gives you misleading results. To fix this, sort a copy created with unsorted.clone(), so each invocation sorts a fresh, unsorted array. The downside is that the clone call is counted towards the benchmark.
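A minimal sketch of the fixed benchmark, reusing the BenchState class from above:

@Benchmark
public void myFirstBenchmark(BenchState benchState) {
    // Clone so every invocation sorts unsorted data; the clone cost is included in the measurement.
    int[] unsorted = benchState.getUnsorted().clone();
    Arrays.sort(unsorted);
}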

Using state to create variants

If you want to benchmark a lot of different parameters, you can use a @State-annotated class to keep track of things. For example, you can use a state object to test different inputs or to activate different behavior. In the following example, I use it to test different inputs.

In the following code, I have a @State-annotated class with a single field, number. JMH will run a separate benchmark for each value in the @Param array.

@State(Scope.Benchmark)
public class ExecutionPlan {

    @Param({"0", "1", "2", "3", "4", "5"})
    public int number;
}

The example will make JMH run six different benchmarks. If I add another parameter, such as @Param({"true", "false"}) on a second field, JMH will create 2 * 6 = 12 benchmarks, one for each combination. This works great if you want to test lots of combinations, but the more combinations you have, the longer the benchmark will take to run; that is something to keep in mind.
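As a sketch, adding a second @Param field to the same state class could look like this (the enabled field is just an illustration):

@State(Scope.Benchmark)
public class ExecutionPlan {

    @Param({"0", "1", "2", "3", "4", "5"})
    public int number;

    @Param({"true", "false"})   // combined with number, JMH runs 6 * 2 = 12 benchmarks
    public boolean enabled;
}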

Understanding JMH output

After each benchmark run, JMH prints the results. The output looks something like the following example: a table with rows and columns, where the header line tells you what each column means. The first column is the name of the benchmark. If you are using @Param, the next column shows the value of the parameter. Then follow the mode that was used, the number of measurement iterations (Cnt), the score of the benchmark, the error bound on that score, and finally the units. What the score means depends on the benchmark mode used.

Benchmark                             (readSize)   Mode  Cnt     Score     Error   Units
b.r.read.RandomReadBenchMark.libUring        512  thrpt    5  1332.440 ± 213.308  ops/ms
b.r.read.RandomReadBenchMark.libUring       4096  thrpt    5  1323.459 ±  93.749  ops/ms

This should help you to understand what the different columns mean and to interpret the results.

Prevent dead code optimizations

To prevent optimizations of unused objects, you can use a black hole. The JVM is very good at optimizing code: if you create objects but don't use them, the JVM can optimize them away. In your production code you use all the objects you create, so that is also what you want to do in your benchmark. One way to achieve this is to use a black hole, which fools the JVM into thinking that the object is actually used.

To use a black hole, all you have to do is to add it as a parameter.

@Benchmark
public void addingToString(Blackhole blackhole, ExecutionPlan plan) {
    var result = "test" + plan.number;
    blackhole.consume(result);
}

After adding it, you can use it to consume objects in your benchmark code.

Constant folding

Constant folding is one of the most common ways the JVM can make your benchmarks lie to you. The JVM is smart enough to evaluate constant expressions at compile time, which means your benchmark might be measuring almost nothing. Here's a simple example that demonstrates the problem:

@Benchmark
public int badMath() {
    return 2 + 2 * 5;  // JVM calculates this as 12 at compile time
}

The JVM sees that this expression will always return 12, so it optimizes the entire method to just return 12. Your benchmark ends up measuring how fast the JVM can return a constant value, which is very fast but tells you nothing about the performance of the operation. This becomes more subtle with string operations:

@Benchmark
public String badStringConcat() {
    return "Hello" + " " + "World";  // Becomes "Hello World" at compile time
}

@Benchmark
public String badStringBuilder() {
    StringBuilder sb = new StringBuilder();
    sb.append("Hello");
    sb.append(" ");
    sb.append("World");
    return sb.toString();  // Still optimized because inputs are constants
}

Both of these methods will be heavily optimized because the JVM knows the result ahead of time. To get meaningful results, you need to use variable data:

@State(Scope.Benchmark)
public class ExecutionPlan {
    public String firstWord = "Hello";
    public String secondWord = " ";
    public String thirdWord = "World";
    
    @Setup
    public void setUp() {
        // You could even randomize these values
        firstWord = "Hello" + System.nanoTime() % 2; // Prevents compile-time optimization
    }
}

@Benchmark
public String goodStringConcat(ExecutionPlan plan) {
    return plan.firstWord + plan.secondWord + plan.thirdWord;  // JVM can't pre-calculate this
}

@Benchmark
public String goodStringBuilder(ExecutionPlan plan) {
  StringBuilder sb = new StringBuilder();
  sb.append(plan.firstWord);
  sb.append(plan.secondWord);
  sb.append(plan.thirdWord);
  return sb.toString();  // JVM can't pre-calculate this either
}

Running these examples, I got the following results:

Benchmark           Mode  Cnt           Score          Error  Units
badStringBuilder   thrpt    5   133744464.132 ±  1611284.483  ops/s
badStringConcat    thrpt    5  2848111332.093 ± 97370493.057  ops/s
goodStringBuilder  thrpt    5    45398739.848 ±  3852761.651  ops/s
goodStringConcat   thrpt    5    61711604.066 ±   567156.543  ops/s

As you can see, the scores differ a lot between the good and bad examples. This is because of the optimizations happening.

To detect constant folding, check whether your benchmark results are suspiciously fast or show unrealistic performance improvements; if so, you're probably hitting constant folding. The fix is always the same: use a variable from a state object that the JVM can't predict at compile time.

Using async profiler with JMH

JMH tells you what is slow, but it doesn't tell you why. That's where the async profiler comes in. Async profiler is a low-overhead sampling profiler that can show you exactly where your application spends its time, down to the method.

The beauty of combining JMH with async profiler is that you get both scoring (from JMH) and deep insights into the call stack (from the profiler). Instead of just knowing that "Method A is 20% slower than Method B," you can see exactly which parts of Method A are causing the slowdown.

Here's how you set it up. First, make sure you have the async profiler library available (see the Dependencies section). Then add the profiler to your JMH options:

Options opt = new OptionsBuilder()
                .include(RandomReadBenchMark.class.getSimpleName())
                .forks(1)
                .addProfiler(AsyncProfiler.class, "lock=1ms simple=true output=flamegraph")
                .shouldDoGC(true)
                .build();

The key parameters I use most often:

  • output=flamegraph Creates an interactive HTML flame graph
  • simple=true Shows simple class names instead of fully qualified names
  • lock=1ms Profiles lock contention (great for finding synchronization bottlenecks)

When you run the benchmark, the async profiler will generate an HTML file that looks like the flame graph in the following image. Let me explain how to read it:

In this real flame graph, you can immediately see the problem: an enormous amount of time is being spent in close() operations. The width of each stack frame represents the percentage of time spent in that method. The wider the frame, the more time it's consuming. Looking at this graph, I can see:

  • The hotspot: Most execution time is in file closing operations
  • The call path: I can trace exactly how we got to these expensive close() calls
  • What to fix: This is clearly where optimization efforts should focus

This is the kind of insight you can't get from JMH alone. JMH might tell you that your file processing benchmark is slow, but the flame graph shows you that the problem isn't reading or processing; it's in cleanup operations that you might not have even considered measuring separately.

Bonus: Linux tools

Perf is another great tool if you are working on Linux, especially if you are working with native calls through JNI or the Foreign Function API. Like many other tools, it shows you where your application spends most of its time.

You can use Perf like so:

perf record The_Thing_You_Want_To_Record

To see what was recorded, you can use perf report; this will create an overview of where the application spends its time.

Samples: 5M of event 'cycles:P', Event count (approx.): 5804229894135
Overhead  Command          Shared Object         Symbol
  34,50%  bench.random.re  [kernel.kallsyms]     [k] native_queued_spin_lock_slowpath 
   4,20%  bench.random.re  [kernel.kallsyms]     [k] rep_movs_alternative             
   4,14%  bench.random.re  [kernel.kallsyms]     [k] filemap_get_read_batch           
   3,68%  bench.random.re  [kernel.kallsyms]     [k] _copy_to_iter                    
   1,77%  bench.random.re  [kernel.kallsyms]     [k] srso_return_thunk                
   1,62%  bench.random.re  [kernel.kallsyms]     [k] apparmor_file_alloc_security     
   1,48%  bench.random.re  [kernel.kallsyms]     [k] walk_component                   
   1,25%  bench.random.re  [kernel.kallsyms]     [k] srso_safe_ret                    
   1,09%  bench.random.re  [kernel.kallsyms]     [k] memset_orig                      
   1,04%  bench.random.re  [kernel.kallsyms]     [k] link_path_walk.part.0.constprop.0
   1,01%  bench.random.re  [kernel.kallsyms]     [k] filemap_read                     
   1,01%  bench.random.re  [kernel.kallsyms]     [k] locks_remove_posix               
   1,00%  bench.random.re  [kernel.kallsyms]     [k] atime_needs_update

I am working on a file IO tool, and the following tool also comes in quite handy during benchmarking. iostat shows you the utilization of the storage devices in your system. It gives you insight into what each device is doing and all kinds of different stats.

I normally run it like so: iostat -x 1. This will keep printing the stats to the console every second. The output looks as follows:

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0,28    1,10    0,22    0,00    0,00   98,40

Device            r/s     rkB/s   rrqm/s  %rrqm r_await rareq-sz     w/s     wkB/s   wrqm/s  %wrqm w_await wareq-sz     d/s     dkB/s   drqm/s  %drqm d_await dareq-sz     f/s f_await  aqu-sz  %util
dm-0             0,00      0,00     0,00   0,00    0,00     0,00    0,00      0,00     0,00   0,00    0,00     0,00    0,00      0,00     0,00   0,00    0,00     0,00    0,00    0,00    0,00   0,00
nvme0n1          0,00      0,00     0,00   0,00    0,00     0,00   29,00    168,00     0,00   0,00    0,14     5,79    0,00      0,00     0,00   0,00    0,00     0,00    0,00    0,00    0,00   0,00

As I said, it shows you a lot of stats about the devices and what they are doing, and it also shows the CPU usage. All of this helps you get insight into what the system is doing.

Conclusion

JMH makes performance measurement a lot more exact and takes much of the guesswork out of it. By handling JVM optimizations, providing scores with error bounds, and integrating with profiling tools, JMH gives you reliable insights into your application.

The key takeaways from this post are: always use @State to manage your benchmark data, watch out for dead code elimination and constant folding, and remember to use tools like async profiler to understand where your application actually spends its time. The combination of JMH benchmarks and flame graphs will show you not just that something is slow, but exactly why it's slow.

Start small with a simple benchmark of the code you suspect might be a bottleneck. Once you see the power of JMH, you'll never go back to guessing about performance again. And remember: premature optimization is the root of all evil.

The post Benchmarking and profiling Java with JMH appeared first on foojay.


Battlesmiths: Blade & Forge announced Steam Deck support

Battlesmiths: Blade & Forge is a tactical RPG where every piece of gear is forged by you, and there's also some town-building involved too.

Read the full article on GamingOnLinux.


8BitDo Pro 3 Bluetooth announced with swappable magnetic ABXY

I don't need another controller, at this point I have 7 but another couldn't hurt right? The 8BitDo Pro 3 Bluetooth with swappable magnetic ABXY looks lovely.

Read the full article on GamingOnLinux.
