Wednesday, 5 July 2017

Transaction synchronization and Spring application events - understanding @TransactionalEventListener

The aim of this article is to explain how @TransactionalEventListener works, how it differs from a simple @EventListener, and finally - what threats we should take into account before using it. Using a real-life example, I will mainly focus on transaction synchronization issues, paying attention neither to consistency nor to application event reliability. You will find a complete Spring Boot project with the described examples here.

Example overview


Imagine we have a microservice which manages customers' basic information and triggers activation token generation after a customer is created. From the business perspective, token generation is not an integral part of user creation and should be a separate process (this is a very important assumption, which I will refer to later). To keep things simple, let's assume that a customer looks like this:
@Entity
public class Customer {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    private String name;
    private String email;
    private String token;

    public Customer() {}

    public Customer(String name, String email) {
        this.name = name;
        this.email = email;
    }

    public void activatedWith(String token) {
        this.token = token;
    }

    public boolean hasToken() {
        return !StringUtils.isEmpty(token);
    }

    ...
    //getters
    //equals and hashcode
}

We have a simple spring-data-jpa repository:
public interface CustomerRepository extends JpaRepository<Customer, Long> {}

And below you can see the essence of the business problem - creating a new customer (persisting it in the database) and returning it.

@Service
public class CustomerService {

    private final CustomerRepository customerRepository;
    private final ApplicationEventPublisher applicationEventPublisher;

    public CustomerService(CustomerRepository customerRepository, ApplicationEventPublisher applicationEventPublisher) {
        this.customerRepository = customerRepository;
        this.applicationEventPublisher = applicationEventPublisher;
    }

    @Transactional
    public Customer createCustomer(String name, String email) {
        final Customer newCustomer = customerRepository.save(new Customer(name, email));
        final CustomerCreatedEvent event = new CustomerCreatedEvent(newCustomer);
        applicationEventPublisher.publishEvent(event);
        return newCustomer;
    }
}

As you can see, CustomerService depends on two beans:
  1. CustomerRepository - an interface used to save customers
  2. ApplicationEventPublisher - Spring's super-interface of ApplicationContext, which declares the way events are published inside a Spring application
Please note the constructor injection. If you are not familiar with this technique or not aware of its benefits relative to field injection, please read this article.

Remember to give -1 during the code review if there is no test included! But wait, take it easy, mine is here:

@SpringBootTest
@RunWith(SpringRunner.class)
public class CustomerServiceTest {

  @Autowired
  private CustomerService customerService;

  @Autowired
  private CustomerRepository customerRepository;

  @Test
  public void shouldPersistCustomer() throws Exception {
    //when
    final Customer returnedCustomer = customerService.createCustomer("Matt", "matt@gmail.com");

    //then
    final Customer persistedCustomer = customerRepository.findOne(returnedCustomer.getId());
    assertEquals("matt@gmail.com", returnedCustomer.getEmail());
    assertEquals("Matt", returnedCustomer.getName());
    assertEquals(returnedCustomer, persistedCustomer);
  }
}

The test does one simple thing - it checks whether the createCustomer method creates a proper customer. One could say that in this kind of test I shouldn't pay attention to implementation details (persisting the entity through the repository, etc.) and should rather put them in some unit test, and I would agree, but let's leave it as it is to keep the example clear.

You may ask now: where is the token generation? Well, due to the business case we are discussing, the createCustomer method does not seem to be a good place for any logic apart from the simple creation of a user (a method name should always reflect its responsibility). In such cases it might be a good idea to use the observer pattern to inform interested parties that a particular event took place. Following these considerations, you can see that we are calling the publishEvent method on applicationEventPublisher. We are propagating an event of the following type:

public class CustomerCreatedEvent {

  private final Customer customer;

  public CustomerCreatedEvent(Customer customer) {
    this.customer = customer;
  }

  public Customer getCustomer() {
    return customer;
  }

  ...
  //equals and hashCode
}

Note that it is just a POJO. Since Spring 4.2 we are no longer obliged to extend ApplicationEvent and can publish any object we like instead. Spring wraps it in PayloadApplicationEvent itself.
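Since the POJO ends up wrapped in a PayloadApplicationEvent, a listener could also declare that wrapper type instead of the payload type. A minimal sketch (the method name is hypothetical, only for illustration):

```java
// Hypothetical alternative listener signature: instead of the payload type,
// a listener may declare the PayloadApplicationEvent wrapper that Spring
// creates around the published POJO
@EventListener
public void processWrappedEvent(PayloadApplicationEvent<CustomerCreatedEvent> event) {
    CustomerCreatedEvent payload = event.getPayload();
    LOGGER.info("Event received: " + payload);
}
```

In practice declaring the payload type directly (as we do in this article) is simpler and is the common choice.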

We do also have an event listener component, like this:
@Component
public class CustomerCreatedEventListener {

    private static final Logger LOGGER = LoggerFactory.getLogger(CustomerCreatedEventListener.class);

    private final TokenGenerator tokenGenerator;

    public CustomerCreatedEventListener(TokenGenerator tokenGenerator) {
        this.tokenGenerator = tokenGenerator;
    }

    @EventListener
    public void processCustomerCreatedEvent(CustomerCreatedEvent event) {
        LOGGER.info("Event received: " + event);
        tokenGenerator.generateToken(event.getCustomer());
    }
}

Before we discuss this listener, let's briefly look at TokenGenerator interface:
public interface TokenGenerator {

    void generateToken(Customer customer);
}


and its implementation:
@Service
public class DefaultTokenGenerator implements TokenGenerator {

    private final CustomerRepository customerRepository;

    public DefaultTokenGenerator(CustomerRepository customerRepository) {
        this.customerRepository = customerRepository;
    }

    @Override
    public void generateToken(Customer customer) {
        final String token = String.valueOf(customer.hashCode());
        customer.activatedWith(token);
        customerRepository.save(customer);
    }
}


We are simply generating a token, setting it as the customer's property and updating the entity in the database. Good, let's update our test class now, so that it checks not only the customer creation but also the token generation.
@SpringBootTest
@RunWith(SpringRunner.class)
public class CustomerServiceTest {

    @Autowired
    private CustomerService customerService;

    @Autowired
    private CustomerRepository customerRepository;

    @Test
    public void shouldPersistCustomerWithToken() throws Exception {
        //when
        final Customer returnedCustomer = customerService.createCustomer("Matt", "matt@gmail.com");

        //then
        final Customer persistedCustomer = customerRepository.findOne(returnedCustomer.getId());
        assertEquals("matt@gmail.com", returnedCustomer.getEmail());
        assertEquals("Matt", returnedCustomer.getName());
        assertTrue(returnedCustomer.hasToken());
        assertEquals(returnedCustomer, persistedCustomer);
    }

}

@EventListener


As you can see, we have moved the token generation logic into a separate component, which is good (note the assumption at the beginning of the previous chapter), but do we have a real separation of concerns? Nope. @EventListener registers processCustomerCreatedEvent as a listener of CustomerCreatedEvent, but it is called synchronously within the bounds of the same transaction as CustomerService. This means that if something goes wrong with token generation, the customer won't be created. Is this the way it should really work? Surely not. Before we generate and set the token, we would rather have the customer already created and saved in the database (committed). Now it is time to introduce the @TransactionalEventListener annotation.

@TransactionalEventListener - transaction synchronization

 

@TransactionalEventListener is an @EventListener enhanced with the ability to collaborate with the surrounding transaction's phase. We call this transaction synchronization - in other words, it is a way of registering callback methods to be invoked when the transaction is being completed. Synchronization is possible in the following transaction phases (the phase attribute):
  • AFTER_COMMIT (the default) - specialization of AFTER_COMPLETION, invoked when the transaction has been successfully committed
  • AFTER_ROLLBACK - specialization of AFTER_COMPLETION, invoked when the transaction has rolled back
  • AFTER_COMPLETION - invoked when the transaction has completed (regardless of the outcome)
  • BEFORE_COMMIT - invoked before the transaction commit
When there is no transaction running, a method annotated with @TransactionalEventListener won't be executed unless the fallbackExecution attribute is set to true.
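The attributes above can be set directly on the annotation. A sketch (the method name is hypothetical, only to illustrate the attributes):

```java
// Hypothetical listener showing the two attributes discussed above
@TransactionalEventListener(
        phase = TransactionPhase.AFTER_ROLLBACK, // default is AFTER_COMMIT
        fallbackExecution = true)                // run even with no active transaction
public void onCustomerCreationRolledBack(CustomerCreatedEvent event) {
    LOGGER.warn("Customer creation rolled back: " + event);
}
```

TransactionPhase comes from the org.springframework.transaction.event package, the same one that provides @TransactionalEventListener.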

Good, looks like this is what we are looking for! Let's replace the @EventListener annotation with @TransactionalEventListener in CustomerCreatedEventListener, then:
@TransactionalEventListener
public void processCustomerCreatedEvent(CustomerCreatedEvent event) {
    LOGGER.info("Event received: " + event);
    tokenGenerator.generateToken(event.getCustomer());
}

We need to check now whether everything works as we expect - let's run our test:
 
java.lang.AssertionError: 
Expected :Customer{id=1, name='Matt', email='matt@gmail.com', token='1575323438'}
Actual   :Customer{id=1, name='Matt', email='matt@gmail.com', token='null'}
 
Why is that? What have we missed? I'll tell you what: we spent too little time analysing how transaction synchronization works. The crucial thing is that we synchronized token generation with the transaction after it had been committed - so we shouldn't expect that anything will be committed again! The Javadoc for the afterCommit method of the TransactionSynchronization interface says it clearly:

The transaction will have been committed already, but the transactional resources might still be active and accessible. As a consequence, any data access code triggered at this point will still "participate" in the original transaction, allowing to perform some cleanup (with no commit following anymore!), unless it explicitly declares that it needs to run in a separate transaction. Hence: Use {@code PROPAGATION_REQUIRES_NEW} for any transactional operation that is called from here.

As we have already stated, we need a strong separation of concerns between the service call and the event listener logic, which means we can follow the advice given by the Spring authors. Let's try annotating the method inside DefaultTokenGenerator:
@Transactional(propagation = Propagation.REQUIRES_NEW)
public void generateToken(Customer customer) {
    final String token = String.valueOf(customer.hashCode());
    customer.activatedWith(token);
    customerRepository.save(customer);
}

If we run our test now, it passes!

Caveat: We are discussing interaction with the AFTER_COMMIT phase only, but all these considerations apply to the other AFTER_* phases as well. In the case of the BEFORE_COMMIT phase none of the above problems should worry you, although you need to make a conscious decision whether your listener's code should really be executed within the same transaction.

Asynchronous execution


What if token generation is a long-lasting process? If it is not an essential part of creating a customer, then we could go one step further and make our @TransactionalEventListener-annotated method asynchronous (by annotating it with @Async). An asynchronous call means that the listener's processCustomerCreatedEvent will be executed in a separate thread. Bearing in mind that a transaction in the Spring framework is by default thread-bound, we won't need the autonomous transaction in DefaultTokenGenerator anymore.
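A minimal sketch of what this could look like (the configuration class name is hypothetical) - asynchronous support has to be switched on with @EnableAsync, and then the listener method gets @Async:

```java
// Hypothetical configuration class enabling Spring's async support
@Configuration
@EnableAsync
public class AsyncConfig {
}

// ...and the listener method becomes asynchronous
@Async
@TransactionalEventListener
public void processCustomerCreatedEvent(CustomerCreatedEvent event) {
    LOGGER.info("Event received: " + event);
    // Runs in a separate thread, outside the producer's (already committed)
    // transaction - so no REQUIRES_NEW is needed in DefaultTokenGenerator
    tokenGenerator.generateToken(event.getCustomer());
}
```

In the sample project this behaviour is activated under the "async" profile, which is why the test below uses @ActiveProfiles("async").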

Good, let's write a test for this case:
@SpringBootTest
@RunWith(SpringRunner.class)
@ActiveProfiles("async")
public class CustomerServiceAsyncTest {

  @Autowired
  private CustomerService customerService;

  @Autowired
  private CustomerRepository customerRepository;

  @Test
  public void shouldPersistCustomerWithToken() throws Exception {
      //when
      final Customer returnedCustomer = customerService.createCustomer("Matt", "matt@gmail.com");

      //then
      assertEquals("matt@gmail.com", returnedCustomer.getEmail());
      assertEquals("Matt", returnedCustomer.getName());

      //and
      await().atMost(5, SECONDS)
          .until(() -> customerTokenIsPersisted(returnedCustomer.getId()));
  }

  private boolean customerTokenIsPersisted(Long id) {
      final Customer persistedCustomer = customerRepository.findOne(id);
      return persistedCustomer.hasToken();
  }
}

The only thing that differentiates this test from the previous one is that we are using the Awaitility library (a great and powerful tool) to await the completion of the async task. We also wrote a simple customerTokenIsPersisted helper method to check whether the token was properly set. And surely the test passes!

Caveat: I don't recommend performing any async tasks in the BEFORE_COMMIT phase, as you won't have any guarantee that they will complete before the producer's transaction is committed.

Conclusions

 

@TransactionalEventListener is a great alternative to @EventListener in situations where you need to synchronize with one of the transaction phases. You can declare listeners as synchronous or asynchronous. Keep in mind that with a synchronous call you are by default working within the same transaction as the event producer. If you synchronize with the AFTER_COMPLETION phase (or one of its specializations), you won't be able to persist anything in the database, as no commit will be executed anymore. If you need to commit some changes anyway, you can declare an autonomous transaction on the event listener's code. The BEFORE_COMMIT phase is much simpler, because the commit is performed after the event listeners are called. With asynchronous calls you don't have to worry about declaring autonomous transactions, as Spring's transactions are by default thread-bound (you will get a new transaction anyway). This is a good option if you have a long-running task to perform. I suggest using asynchronous tasks only when synchronizing with the AFTER_COMPLETION phase or one of its specializations. As long as you don't need your event listener's method to be transactional, the described problems shouldn't bother you at all.

Further considerations


In real-life scenarios numerous other requirements might occur. For example, you might need to both persist the customer and send an invitation email, as depicted below:
@Component
public class CustomerCreatedEventListener {

    private final MailingFacade mailingFacade;
    private final CustomerRepository customerRepository;

    public CustomerCreatedEventListener(MailingFacade mailingFacade, CustomerRepository customerRepository) {
        this.mailingFacade = mailingFacade;
        this.customerRepository = customerRepository;
    }

    @EventListener
    public void processCustomerCreatedEvent(CustomerCreatedEvent event) {
        final Customer customer = event.getCustomer();
        // sending invitation email
        mailingFacade.sendInvitation(customer.getEmail());
    }
}

Imagine a situation where the email is sent successfully, but right after that our database goes down and it is impossible to commit the transaction (persist the customer) - thus, we lose consistency! If such a situation is acceptable within your business use case, then it is completely fine to leave it as it is; if not, you have a much more complex problem to solve. I have intentionally not covered consistency and event reliability issues here. If you want a broader picture of how to deal with such situations, I recommend reading this article.






Sunday, 9 April 2017

Example of multiple login pages with Spring Security and Spring Boot

I had just finished preparing a Spring Security configuration for a Zuul proxy in my company when a new requirement in this area came in from the business. We needed different login pages for different URLs accessed within the same application. I'm not a front-end guy, so my first thought was to enhance my existing security config. I found this topic interesting enough to investigate the broader possibilities of defining separate security constraints for different URL path patterns. In this article I will describe an example of how to achieve this and how to test your configuration using Spring Boot and Thymeleaf as a templating engine. You will find a complete project here. I will mainly focus on form-based login, but at the end you will also see how to benefit from these examples in order to provide various HTTP security types within a single application. Enjoy!


Imagine we have two home pages that should be accessible under the following paths: /regular/home and /special/home. We would like to have them secured with corresponding login forms: /regular/login and /special/login. By default, Thymeleaf templates are expected to be located in the /templates directory:
├───src
│   ├───main
│   │   ├───java
│   │   │   └───com
│   │   │       └───bslota
│   │   │           │   MultiLoginApplication.java
│   │   │           │
│   │   │           └───config
│   │   │                   MvcConfig.java
│   │   │                   SecurityConfig.java
│   │   │
│   │   └───resources
│   │       │   application.yml
│   │       │
│   │       ├───static
│   │       │   └───css
│   │       │           styles.css
│   │       │
│   │       └───templates
│   │           ├───regular
│   │           │       home.html
│   │           │       login.html
│   │           │
│   │           └───special
│   │                   home.html
│   │                   login.html
│   │
│   └───test
│       └───java
│           └───com
│               └───bslota
│                       WebSecurityTest.java
│
│   .gitignore
│   mvnw
│   mvnw.cmd
│   pom.xml
Now, apart from the HTML templates, we need to configure our application so that it resolves all views properly. To keep it simple, this should be enough for our example:
@Configuration
public class MvcConfig extends WebMvcConfigurerAdapter {

    @Override
    public void addViewControllers(ViewControllerRegistry registry) {
        registry.addViewController("/").setViewName("regular/home");
        registry.addViewController("/regular/home").setViewName("regular/home");
        registry.addViewController("/special/home").setViewName("special/home");
        registry.addViewController("/regular/login").setViewName("regular/login");
        registry.addViewController("/special/login").setViewName("special/login");
    }
}
In order to introduce security into the application, we need to declare the following dependency:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-security</artifactId>
</dependency>
Hey, hold on, we are not going to do anything without testing (I hope you already have the habit of writing tests, don't you?). In order to test our security configuration, we need this library:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
</dependency>
Right now, what we get out of the box by having the spring-boot-starter-security dependency on the classpath is:
  • an HTTP Basic security setup for all endpoints,
  • a randomly generated password, logged to the console during startup, for a user named user.
As mentioned at the beginning, we want a form-based login. Let's define our requirements in a couple of simple tests, then:

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.MOCK)
@AutoConfigureMockMvc
public class WebSecurityTest {

    @Autowired
    private MockMvc mockMvc;

    @Test
    public void testIfRegularHomePageIsSecured() throws Exception {
        final ResultActions resultActions = mockMvc.perform(get("/regular/home"));
        resultActions
                .andExpect(status().is3xxRedirection())
                .andExpect(redirectedUrl("http://localhost/regular/login"));
    }

    @Test
    @WithMockUser
    public void testIfLoggedUserHasAccessToRegularHomePage() throws Exception {
        final ResultActions resultActions = mockMvc.perform(get("/regular/home"));
        resultActions
                .andExpect(status().isOk())
                .andExpect(view().name("regular/home"));
    }

    @Test
    @WithMockUser
    public void testIfLoggedUserHasAccessToSpecialHomePage() throws Exception {
        final ResultActions resultActions = mockMvc.perform(get("/special/home"));
        resultActions
                .andExpect(status().isOk())
                .andExpect(view().name("special/home"));
    }

}

What you should know from this code is that we are creating a mock servlet environment (webEnvironment = SpringBootTest.WebEnvironment.MOCK) and auto-configuring MockMvc (@AutoConfigureMockMvc) - a neat and powerful tool for web controller testing. With this setup you don't need to build and configure MockMvc on your own - you can simply inject it as a regular bean dependency. Spring Boot never stops fascinating me.
We have three tests written here. The first one says:

given:
  anonymous user
when:
  trying to access /regular/home URL
then:
  I get 302 HTTP response
and:
  I'm redirected to /regular/login page

And the second one:

given:
  a principal with username "user" and password "password"
when:
  trying to access /regular/home URL
then:
  I get 200 HTTP response
and:
  I access regular/home view

The third one is almost the same as the second (the only difference is the URL that is being accessed with MockMvc). While the when-then parts should be rather clear, the given part might be a bit confusing. In the first test we have not defined any security constraints, so the MockMvc call will be considered one from an anonymous user. In the second test, though, we are using the @WithMockUser annotation, which emulates a call performed by an authenticated user. The SecurityContext within a so-annotated test will contain an implementation of the Authentication interface of type UsernamePasswordAuthenticationToken. With this annotation you can adjust the username, roles/authorities and password as well. By default, we will have a principal with the following details:
  • username: user
  • password: password
  • roles: USER 
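If different details were needed, they could be set through the annotation's attributes. A sketch (the test method name and values are hypothetical):

```java
// Hypothetical test method overriding the default mock user details
@Test
@WithMockUser(username = "special-user", roles = {"USER", "ADMIN"})
public void testSpecialHomePageAsCustomUser() throws Exception {
    mockMvc.perform(get("/special/home"))
            .andExpect(status().isOk());
}
```
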
It is sufficient to stick to the defaults here. Okay, we have some tests, but they fail. Sure they do! Here is what we need to do now:
  • Define AuthenticationManager (in-memory one will be enough for this example) and create appropriate user,
  • Configure HttpSecurity so that all resources are secured with form-based login under /regular/login.

@Configuration
public class SecurityConfig {

    @Configuration
    public static class RegularSecurityConfig extends WebSecurityConfigurerAdapter {

        @Override
        protected void configure(HttpSecurity http) throws Exception {
            //@formatter:off
            http
                .authorizeRequests()
                    .antMatchers("/css/**").permitAll()
                    .anyRequest().authenticated()
                    .and()
                .formLogin()
                    .loginPage("/regular/login")
                    .defaultSuccessUrl("/regular/home")
                    .permitAll()
                    .and()
                .logout()
                    .logoutUrl("/regular/logout")
                    .permitAll();
            //@formatter:on
        }
    }

    @Autowired
    public void configureGlobal(AuthenticationManagerBuilder auth) throws Exception {
        auth.inMemoryAuthentication()
                .withUser("user")
                .password("password")
                .roles("USER");
    }
}
Good - within the configureGlobal method we configured an in-memory AuthenticationManager and created a user, as desired. We also declared a RegularSecurityConfig class that extends WebSecurityConfigurerAdapter and overrides the configure(HttpSecurity) method - this gives us the ability to change the default pre-configured security behaviour. Within the configure(HttpSecurity) method, we:
  • decided to authenticate any request (apart from accessing css files - nevermind that part),
  • set login page URL to /regular/login,
  • set default success login URL to /regular/home - the place where users will be directed after authenticating, not having visited a secured page,
  • set logout URL to /regular/logout.
Of course, we don't need the logout or default success URL definitions to complete this example, but you will appreciate them when you download and run the whole app, I hope. All right, now all tests pass. What's next? Tests again:
    @Test
    public void testIfSpecialHomePageIsSecured() throws Exception {
        final ResultActions resultActions = mockMvc.perform(get("/special/home"));
        resultActions
                .andExpect(status().is3xxRedirection())
                .andExpect(redirectedUrl("http://localhost/special/login"));
    }
This test is very similar to the previously written one. We are checking whether an anonymous user trying to access the /special/home page gets redirected to /special/login. Surely this test won't pass. The remedy is to add the following code to the SecurityConfig class:
    @Configuration
    @Order(1)
    public static class SpecialSecurityConfig extends WebSecurityConfigurerAdapter {

        @Override
        protected void configure(HttpSecurity http) throws Exception {
            //@formatter:off
            http
                .antMatcher("/special/**")
                .authorizeRequests()
                    .anyRequest().authenticated()
                    .and()
                .formLogin()
                    .loginPage("/special/login")
                    .defaultSuccessUrl("/special/home")
                    .permitAll()
                    .and()
                .logout()
                    .logoutUrl("/special/logout")
                    .permitAll();
            //@formatter:on
        }
    }

We just created yet another WebSecurityConfigurerAdapter extension. When you look at the body of the configure method, you will see that it is pretty similar to the one we defined previously. The difference is the antMatcher, which enables us to apply these settings only to URLs that begin with the /special/ prefix. And how does Spring know which configure method (from which WebSecurityConfigurerAdapter extension) should be invoked first? Because of the @Order(1) annotation.
Yep, we got it! We have implemented and tested a Spring MVC configuration with two separate login pages for different URLs. This example should show you how many possibilities Spring Security gives you. Do not think that this kind of configuration is limited to form-based login - you could declare HTTP Basic security for all requests that concern /special/** URLs as well, or set any security constraints you like for whatever path patterns.

Tuesday, 28 February 2017

Defining bean dependencies with Java Config in Spring Framework

I found it hard to choose a topic for my first blog post. I wanted it to be neither too trivial nor too complicated. It turned out that there are many basic concepts in the Spring Framework that can be confusing, and Java-based configuration is one of them. I hear my colleagues asking about it from time to time, and I see numerous questions regarding it on StackOverflow. Nevermind the motivation - below you will find a compact description of how to declare beans with Java Config.
Please note that this post will cover neither bean dependencies of different scopes nor a discussion of annotations like @Component, @Service, @Repository, etc., which are often a good alternative to the described approach. Java Config might seem to be overkill in many situations, but I hope you will find this post useful anyway. Let's go!

Inter-bean reference

Imagine we have the following repository:
public class MyRepository {
  public String findString() {
    return "some-string";
  }
}
and a service that depends on a repository of that type:
public class MyService {
  private final MyRepository myRepository;

  public MyService(MyRepository myRepository) {
    this.myRepository = myRepository;
  }

  public String generateSomeString() {
    return myRepository.findString() + "-from-MyService";
  }
}
The first solution is pretty straightforward and is called an inter-bean reference. The MyService bean depends on MyRepository. In order to fulfil this dependency, we call the myRepository() method and pass its result as a constructor parameter (constructor injection).
@Configuration
class MyConfiguration {
    @Bean
    public MyService myService() {
        return new MyService(myRepository());
    }

    @Bean
    public MyRepository myRepository() {
        return new MyRepository();
    }
}
Easy, right? Good, let's complicate it a bit. Imagine we have the following repository, which has a few dependencies - to make it clearer, I will use String fields:
public class MyRepository {
  private final String prefix;
  private final String suffix;

  public MyRepository(String prefix, String suffix) {
    this.prefix = prefix;
    this.suffix = suffix;
  } 

  public String findString() {
    return prefix + "-some-string-" + suffix;
  }
}
MyService stays the same. Now we want to create a singleton bean of type MyRepository, where prefix and suffix are values from an external properties file. In order to inject those properties, we will use the @Value annotation.
@Configuration
class MyConfiguration {
    @Bean
    public MyService myService() {
        return new MyService(myRepository(null, null));
    }

    @Bean
    public MyRepository myRepository(@Value("${repo.prefix}") String prefix,
                                     @Value("${repo.suffix}") String suffix) {
        return new MyRepository(prefix, suffix);
    }
}
Wait, what did I just do? I did a constructor injection by calling a method with both params equal to null. Here is how it all works: the @Configuration annotation tells Spring that the so-annotated class will have one or more beans defined inside. The @Bean annotation is a way of telling the Spring container to manage the bean returned by the so-annotated method. As in the examples above, @Bean methods are generally used inside configuration classes, which are proxied by CGLIB - the result of a bean-defining method is registered as a Spring bean, and each call of this method will return the same bean (as long as it is a singleton, of course). Thus, thanks to CGLIB, whatever params (like the nulls in the example above) you use while calling such a method, you will get the proper bean. Remember also that because of the CGLIB proxying, configuration classes and bean-defining methods must not be private or final.

There is a caveat, though. The @Bean annotation might be used not only in @Configuration classes - it can be used in a @Component, for example. Then we are talking about the so-called lite mode. In this case no proxies are created, so method invocations are not intercepted and are thus interpreted as typical Java method invocations.
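To illustrate the difference, a sketch using the no-arg MyRepository from the first example (the class name is hypothetical):

```java
// Hypothetical class illustrating lite mode: @Bean methods inside a
// @Component, so the class is NOT proxied by CGLIB
@Component
class LiteBeans {

    @Bean
    public MyRepository myRepository() {
        return new MyRepository();
    }

    @Bean
    public MyService myService() {
        // Lite mode: this is a plain Java call, so it constructs a second
        // MyRepository instance instead of returning the container-managed
        // singleton registered by the myRepository() bean definition
        return new MyService(myRepository());
    }
}
```

If this class were annotated with @Configuration instead, the call to myRepository() would be intercepted and both beans would share the same repository instance.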

I think the inter-bean reference is a very nice way of defining bean dependencies inside a @Configuration class as long as the bean-defining method has no parameters. Otherwise, I choose the approach below - read on!

Dependency as @Bean method parameter

Okay, and what if you don't like passing method calls as arguments? Or you have the MyRepository bean defined with the @Component annotation, or in some other @Configuration class (but, of course, in the same context)? It is enough to declare your dependency as a method parameter (@Autowired is not needed!):
@Configuration
class MyConfiguration {
    @Bean
    public MyService myService(MyRepository myRepository) {
        return new MyService(myRepository);
    }
}
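For completeness, the examples assume plain classes along these lines - a hypothetical sketch (the label() method is my own addition for illustration), since nothing Spring-specific is needed inside the classes themselves:

```java
// Hypothetical sketch of the classes behind the examples: plain POJOs
// with constructor-expressed dependencies.
class MyRepository {
    private final String prefix;
    private final String suffix;

    MyRepository(String prefix, String suffix) {
        this.prefix = prefix;
        this.suffix = suffix;
    }

    // Illustrative helper combining the constructor arguments.
    String label() {
        return prefix + "-" + suffix;
    }
}

class MyService {
    private final MyRepository myRepository;

    MyService(MyRepository myRepository) {
        this.myRepository = myRepository;
    }
}
```

Keeping the classes Spring-agnostic like this is exactly what makes the @Bean style attractive: all the wiring lives in the configuration class.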
But hold on - what if I have several MyRepository beans? Aha! Spring first looks at the type of the parameter. If it finds more than one bean of this type, it falls back to injection by name (yes, the name of the parameter must match the bean name - note that this requires parameter names to be present in the compiled bytecode, e.g. via the -parameters compiler flag or debug information):
@Configuration
class MyConfiguration {
    
    @Bean
    public MyRepository myFirstRepository() {
        return new MyRepository("first", "repository");
    }

    //a bean that will be injected by name into myService
    @Bean
    public MyRepository mySecondRepository() {
        return new MyRepository("second", "repository");
    }

    @Bean
    public MyService myService(MyRepository mySecondRepository) {
        return new MyService(mySecondRepository);
    }
}
If you don't want to rely on the parameter name, you can use the @Qualifier annotation as well, and it will take precedence over the parameter name:
@Configuration
class MyConfiguration {
    
    //the bean that will be injected into myService via @Qualifier
    @Bean
    public MyRepository myFirstRepository() {
        return new MyRepository("first", "repository");
    }

    @Bean
    public MyRepository mySecondRepository() {
        return new MyRepository("second", "repository");
    }

    @Bean
    public MyService myService(@Qualifier("myFirstRepository") MyRepository someRepository) {
        return new MyService(someRepository);
    }
}

@Configuration composition

You may now wonder what to do when your configuration is spread over numerous @Configuration classes and you want to set up dependencies among them. Let's consider the following classes:
@Configuration
class FirstConfiguration {
    @Bean
    public FirstService firstService() {
        return new FirstService();
    }
}
@Configuration
class SecondConfiguration {
    @Bean
    public SecondService secondService() {
        return new SecondService();
    }
}
Now imagine that SecondService becomes dependent on FirstService. If both configurations are registered in a common application context (this is important here!), you can inject the bean like in one of the previous examples:
@Configuration
class SecondConfiguration {
    @Bean
    public SecondService secondService(FirstService firstService) {
        return new SecondService(firstService);
    }
}
@Configuration is meta-annotated with @Component, which means that the class will be component scanned and is itself a Spring bean, so you can benefit from the DI mechanisms provided by Spring. This means you can also autowire your bean this way:
@Configuration
class SecondConfiguration {
    @Autowired
    private FirstService firstService;

    @Bean
    public SecondService secondService() {
        return new SecondService(firstService);
    }
}
As I mentioned before, a @Configuration class is a @Component. Thus, you can inject it as if it were a regular bean, and call its methods in order to retrieve beans (you will be injecting a CGLIB proxy!):
@Configuration
class SecondConfiguration {
    @Autowired
    private FirstConfiguration firstConfiguration;

    @Bean
    public SecondService secondService() {
        return new SecondService(firstConfiguration.firstService());
    }
}
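To convince yourself that the proxy really is at work here, you could bootstrap both configurations and compare references - a sketch (the ProxyCheck class is hypothetical, assuming the FirstConfiguration and SecondConfiguration classes defined above):

```java
import org.springframework.context.annotation.AnnotationConfigApplicationContext;

// Sketch: verifying that a FirstService obtained through the proxied
// configuration class is the same singleton the container manages.
public class ProxyCheck {
    public static void main(String[] args) {
        try (AnnotationConfigApplicationContext ctx =
                 new AnnotationConfigApplicationContext(FirstConfiguration.class,
                                                        SecondConfiguration.class)) {
            FirstService fromContext = ctx.getBean(FirstService.class);
            // calling firstService() on the injected configuration goes
            // through the CGLIB proxy, so it returns the same singleton
            FirstService viaConfigCall =
                    ctx.getBean(FirstConfiguration.class).firstService();
            System.out.println(fromContext == viaConfigCall);
        }
    }
}
```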
That's not all. There is yet another Spring mechanism, hidden under the @Import annotation:
@Configuration
@Import({FirstConfiguration.class})
class SecondConfiguration {
    ...
}
This looks nice and gives you the ability to relate configurations that would not otherwise be registered in the same context (for example, classes that are not picked up by component scanning). In this setup you can autowire the beans, or the configuration class itself, just like in the two previous code snippets.
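With @Import in place, registering only SecondConfiguration is enough to make the imported beans available - a sketch (the ImportCheck class is hypothetical, assuming the @Import-annotated SecondConfiguration from the snippet above):

```java
import org.springframework.context.annotation.AnnotationConfigApplicationContext;

// Sketch: SecondConfiguration @Imports FirstConfiguration, so bootstrapping
// only SecondConfiguration still puts FirstService into the context.
public class ImportCheck {
    public static void main(String[] args) {
        try (AnnotationConfigApplicationContext ctx =
                 new AnnotationConfigApplicationContext(SecondConfiguration.class)) {
            // resolves thanks to @Import - no component scanning required
            FirstService firstService = ctx.getBean(FirstService.class);
            System.out.println(firstService != null);
        }
    }
}
```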

Conclusions

Defining bean dependencies is a very basic concept of the Spring framework, but as you could see here, there are lots of ways to do it - enough approaches to get confused. Which to choose, which is best, which looks nice and which is ugly - it depends on the situation and personal (or team) preferences. How do I do it?
  • When I have bean definitions inside one configuration class and the bean-defining methods have no parameters - I use inter-bean references; otherwise I pass the bean as a method parameter. Easy IDE navigation is not as vital as good-looking code.
  • When I have two @Configuration classes inside a common context - I pass beans as method parameters or autowire them.
  • When I have two @Configuration classes that are not registered in a common context - I use the @Import annotation and then pass beans as method parameters or autowire them.